I have a Capfile for Multistage deploys that needs to deploy the code to one server (NFS) and then restart several application servers. Roles therefore cannot be used easily, since the application servers should not take part in deploy:update_code. I have come up with something that might work, but there is an issue that needs to be resolved.
application_servers = nil

task :production do
  role :nfs, "nfs.someserver.net"
  application_servers = "app.someserver.net"
end

task :staging do
  role :nfs, "nfs-staging.someserver.net"
  application_servers = "app-staging.someserver.net"
end

desc "tail resin logs #{resin_logs}"
task :tail, :hosts => application_servers do
  puts("Server is:#{application_servers}")
  stream "tail -f #{resin_logs}"
end
And when running:
$ cap staging tail
* executing `staging'
* executing `tail'
Server is:app-staging.someserver.net
* executing "tail -f /log/resin/*.log"
servers: ["nfs-staging.someserver.net"]
[nfs-staging.someserver.net] executing command
tail: cannot open `/log/resin/*.log' for reading: No such file or directory
tail: no files remaining
command finished
failed: "sh -c 'tail -f /log/resin/*.log'" on nfs-staging.someserver.net
When printing the value of application_servers inside the tail task it shows "app-staging.someserver.net", but the value used in :hosts => application_servers is empty (which is why the nfs role is used instead).
Why does the variable application_servers have two different values? Is it a scope issue? I also tried a global variable ($), and that does not work either.
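The cause is evaluation order rather than variable scope: the options hash of a task definition is evaluated once, when the Capfile is loaded, before the production/staging task has run, while the task body is evaluated at execution time. A minimal illustration (not from the original post):

servers = nil

task :staging do
  servers = "app-staging.someserver.net"
end

# :hosts is evaluated HERE, at load time, so it captures nil;
# the body runs later, after :staging has assigned the variable.
task :tail, :hosts => servers do
  puts servers   # prints the staging value at run time
end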
I solved the issue by changing from :hosts to :roles on the application-specific task and adding a new role. The key feature is :no_release, so that the code is not deployed to the application servers; we only want to restart the resin instance on those machines.
task :production do
  role :nfs, "nfs.someserver.net"
  role :application, "app.someserver.net", :no_release => true
end

task :staging do
  role :nfs, "nfs-staging.someserver.net"
  role :application, "app-staging.someserver.net", :no_release => true
end

desc "tail resin logs #{resin_logs}"
task :tail, :roles => :application do
  stream "tail -f #{resin_logs}"
end
So, I have a rake task that runs as a cron job (it's running on Windows, so rufus-scheduler stands in for cron); this particular rake task sends data via HTTP to a server. If a specifically defined error is sent back in the HTTP response body, another rake task is run with the system command.
Unfortunately this runs on Windows. The rake task connects to a database and sends data as each row in the DB is read, and the run fails with the error below: CMD.EXE is started with a UNC path as its current directory, which it does not support, so it falls back to the Windows directory, where no Rakefile exists and the nested rake command aborts.
C:\Users\ALilland\Documents\macros\experiments\core_scripts\app>rake sage_time_records:import
'\\SAGE\TIMBERLINE OFFICE\9.5\ACCOUNTING\GLOBAL'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)
Rake Task
require 'rest-client'
require 'json'

namespace :sage_time_records do
  task :import do
    ## establish connection
    # ...
    ## Print out each row
    while row = db_query.fetch do
      # ...
      ## define the payload of what will be sent to the API
      # ...
      begin
        url = "#{ENV['API']}/time_records"
        print "Sending Time Record to API ... "
        response = RestClient.post(url, payload, { accept: :json })
        puts 'success' if response.code == 201
      rescue RestClient::ExceptionWithResponse => error
        puts 'failed' if error.response.code != 201
        error_body = JSON.parse(error.response.body)
        next if error_body['error'] == 'Job Number not found'
        ## calling the other rake task and passing a variable
        system("rake sage_field_employees:import[\"#{mapped[:employee]}\"]") if error_body['error'] == 'Field Employee not found'
        # ...
      end
    end
    # ...
  end
end
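One way around the CMD.EXE/UNC problem (a sketch, not from the original post) is to skip the shell entirely and invoke the second task in-process:

## sketch: invoke the other task in-process instead of via system(),
## so no CMD.EXE is spawned and the UNC current directory never matters
if error_body['error'] == 'Field Employee not found'
  other = Rake::Task['sage_field_employees:import']
  other.reenable                   # allow it to run again for later rows
  other.invoke(mapped[:employee])
end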
I am using the user resource to define a user within Chef.
Now I want to retrieve the UID and GID values for that same user.
Can this be done using Ruby code within Chef?
Right now I am trying to use a bash resource that runs the following command:
id -u $USERNAME
You can use the automatic attributes contributed by Ohai.
That information is accessible via the following (substitute the actual user name for $USERNAME):
# uid
node['etc']['passwd']['$USERNAME']['uid']
# gid
node['etc']['passwd']['$USERNAME']['gid']
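For example, inside a recipe (a minimal sketch; the 'deploy' user name and path are illustrative):

# sketch: using the Ohai passwd data inside a recipe
uid = node['etc']['passwd']['deploy']['uid']
gid = node['etc']['passwd']['deploy']['gid']

directory '/srv/app' do
  owner uid   # numeric uid from Ohai
  group gid   # numeric gid from Ohai
end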
From the command line, you can explore the attributes as follows:
$ ohai etc/passwd/vagrant
{
  "dir": "/home/vagrant",
  "gid": 900,
  "uid": 900,
  "shell": "/bin/bash",
  "gecos": "vagrant,,,"
}
$ ohai etc/passwd/vagrant/uid
900
If you create a user during the Chef run and want to access its information within the same run, you probably have to trigger a reload of the responsible Ohai plugin. (Chef might trigger this automatically, but I wouldn't expect it to.)
ohai 'reload passwd' do
  plugin 'passwd'
  action :nothing                  # only runs when notified
end

user 'john' do
  # re-run the passwd plugin as soon as the user exists
  notifies :reload, 'ohai[reload passwd]', :immediately
end
For this you can use Ruby's Etc class: http://ruby-doc.org/stdlib-2.1.2/libdoc/etc/rdoc/Etc.html#method-c-getpwnam
[15] pry(main)> Etc.getpwnam('david').uid
=> 501
[16] pry(main)> Etc.getpwnam('david').gid
=> 20
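One caveat when using this inside a recipe (not from the original answer): resource attributes are normally evaluated at compile time, so for a user created in the same run you can defer the lookup with lazy. A sketch, with an illustrative user and path:

require 'etc'

file '/home/john/.bashrc' do
  owner lazy { Etc.getpwnam('john').uid }
  group lazy { Etc.getpwnam('john').gid }
end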
I am trying to use Vagrant to create an Ubuntu VM. My host is Windows 7 and my base box is precise64.
If I do the recommended way of adding a user in Puppet like this:
user { "johnboy":
ensure => present,
managehome => true,
password => '$6$ev8faya2$M2pB3YQRpKUJMnJx6LnsyTbDdi.umsEEZttD01pk8ZSfMGrVmlnjoVhIHyuqYt3.yaG1SZjaoSxB39nNgFKb//',
groups => ["admin"],
shell => "/bin/bash";
}
I log in after Vagrant has provisioned my box and the hash is not in /etc/passwd.
If I don't set it with the resource type but instead use an exec with usermod, like this:
user { "johnboy":
ensure => present,
managehome => true,
groups => ["admin"],
shell => "/bin/bash";
}
exec { 'set password':
command => "usermod -p '$6$ev8faya2$M2pB3YQRpKUJMnJx6LnsyTbDdi.umsEEZttD01pk8ZSfMGrVmlnjoVhIHyuqYt3.yaG1SZjaoSxB39nNgFKb//' johnboy",
require => User[johnboy];
}
I end up with only part of the hash on the user's line in /etc/shadow:
johnboy:.umsEEZttD01pk8ZSfMGrVmlnjoVhIHyuqYt3.yaG1SZjaoSxB39nNgFKb//:16101:0:99999:7:::
Some pages suggest installing ruby-shadow so I tried this:
gem install ruby-shadow
However, the install failed, probably because I don't have Ruby installed. Vagrant was a 100 MB download; is the gem for managing passwords really not included in that?
How do I get Vagrant/Puppet to provision the password correctly?
That's because the hash is stored in the /etc/shadow file, not /etc/passwd; for security reasons it is only readable by the root/super user.
As for the truncated hash: inside a double-quoted Puppet string, $6, $ev8faya2 and $M2pB3YQRpKUJMnJx6LnsyTbDdi are treated as variable interpolations (all empty), which is why only the tail of the hash survived. Escape the dollar signs like this and it should work.
exec { 'set password':
  command => "usermod -p '\$6\$ev8faya2\$M2pB3YQRpKUJMnJx6LnsyTbDdi.umsEEZttD01pk8ZSfMGrVmlnjoVhIHyuqYt3.yaG1SZjaoSxB39nNgFKb//' johnboy",
  require => User[johnboy];
}
I am trying to create tasks with different roles:
namespace :foo do
  task :mytasks, :roles => [:a, :b] do
    task_a
    task_b
  end

  task :task_a, :roles => :a do
    run 'echo A'
  end

  task :task_b, :roles => :b do
    run 'echo B'
  end
end
When I execute mytasks, here is the result:
$ cap -n ROLES=b foo:mytasks
* 2013-03-01 16:59:14 executing `foo:mytasks'
* executing "echo A"
* executing "echo B"
All tasks get executed. Why?
Capistrano Roles are intended to associate a given server (or multiple servers) with a particular function, such as saying "machine-a" is a web server while "machine-b" is a database server, which is useful because certain tasks only need to be performed on certain machines.
So roles are not intended to be a way to conditionally select which machine(s) to run tasks on at the time you invoke Capistrano; they simply determine which tasks run on which machines.
There is, however, another Capistrano feature called Multistage that may be what you're looking for. It allows you to specify different sets of servers (and even associate them with different roles) based on the "stage" you're deploying to. So you could have a and b stages, each with separate sets of servers, which you could deploy using:
cap a foo:mytasks
cap b foo:mytasks
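A minimal multistage sketch (assuming the capistrano-ext gem; host names are illustrative):

# config/deploy.rb
set :stages, %w(a b)
set :default_stage, 'a'
require 'capistrano/ext/multistage'

# config/deploy/a.rb
role :a, 'machine-a.example.com'

# config/deploy/b.rb
role :b, 'machine-b.example.com'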
I have to create a script to manage a maintenance-page server for my hosting company.
I will need a CLI interface that acts like this (example scenario).
(Here, let's suppose that mcli is the name of the script and 1.1.1.1 is the original server address that hosts the website, www.exemple.com.)
Here I just create the loopback interface on the maintenance server with the original IP address, and create the nginx site-specific config file in sites-enabled:
$ mcli register www.exemple.com 1.1.1.1
[DEBUG] Adding IP 1.1.1.1 to new loopback interface lo:001001001001
[WARNING] No root directory specified, setting default maintenance page.
[DEBUG] Registering www.exemple.com maintenance page and reloading Nginx: OK
Then, when I want to enable the maintenance page and completely shut down the website:
$ mcli maintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Setting new route to 1.1.1.1 to maintenance server: OK
[DEBUG] Writing configuration: Ok
Then removing the maintenance page:
$ mcli nomaintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Removing route to 1.1.1.1: Ok
[DEBUG] Writing configuration: Ok
And I would need a function to see the current state of the websites:
$ mcli list
+------------------+-----------------+------------------+
| Site Name | Server I.P | Maintenance mode |
+------------------+-----------------+------------------+
| www.example.com | 1.1.1.1 | Enabled |
| www.example.org | 1.1.1.2 | Disabled |
+------------------+-----------------+------------------+
$ mcli show www.example.org
Site Name: www.example.org
Server I.P: 1.1.1.2
Maintenance Mode: Disabled
Root Directory : /var/www/maintenance/default/
But I have never done this kind of scripting with Ruby. What gems do you recommend for this kind of thing? For command-line parsing? For column/colorized output? For SSH connections (needed to connect to the Cisco routers)?
Do you recommend using a local database (SQLite) to store metadata (state changes, current states), or computing it on the fly by analyzing the nginx/interface configuration files and using syslog to monitor changes made by this script?
This script will be used first for a massive physical datacenter migration, and later for routine scheduled downtimes.
Thank you
First of all, I'd recommend you get a copy of Build Awesome Command-Line Applications in Ruby.
That said, you might want to check:
GLI, for git-like command-line parsing (subcommands)
OptionParser, from the Ruby standard library, for simpler command-line parsing (a short sketch follows)
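For example, a minimal OptionParser sketch for the mcli interface above (flag names and the default root are illustrative):

#!/usr/bin/env ruby
require 'optparse'

options = { root: '/var/www/maintenance/default/' }
parser = OptionParser.new do |opts|
  opts.banner = 'Usage: mcli COMMAND [arguments] [options]'
  opts.on('-r', '--root DIR', 'maintenance page root directory') { |dir| options[:root] = dir }
end
parser.order!                 # stop parsing at the first non-option (the subcommand)

case ARGV.shift
when 'register'
  site, ip = ARGV
  puts "registering #{site} -> #{ip} (root: #{options[:root]})"
when 'list'
  # look up the stored state and print the table here
else
  abort parser.to_s
end

GLI gives you the subcommand plumbing (per-command flags, generated help) for free, at the cost of a dependency; OptionParser keeps the script dependency-free.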
Personally, I'd go for the SQLite approach for storing data, but I'm biased (having a strong SQL background).
Thor is a good gem for handling CLI options. It allows this type of organization in your script:
require 'thor'

class Maintenance < Thor
  desc "maintenance", "put up maintenance page"
  method_option :switch, :aliases => '-s', :type => :string
  # The method name is the name of the task that would be run => mcli maintenance
  def maintenance
    # do stuff
  end

  no_tasks do
    # methods that you don't want CLI tasks for go here
  end
end

Maintenance.start
I don't really have any good suggestions for column/colorized output, though.
I definitely recommend using some kind of database to store state, though maybe not SQLite; I would probably opt for a Redis database that stores key/value pairs with the information you are looking for.
We had a similar task. I use the following architecture:
A small application (written in C) that generates the config file.
A new update_clusters switch added to the nginx init.d script. The script reloads nginx only if the config file has changed:
update_clusters() {
  ${CONF_GEN} --outfile=/tmp/nginx_clusters.conf
  RETVAL=$?
  if [[ "$RETVAL" != "0" ]]; then
    return 5
  fi
  if ! diff ${CLUSTER_CONF_FILE} /tmp/nginx_clusters.conf > /dev/null; then
    echo "Cluster configuration changed. Reload service"
    mv -f /tmp/nginx_clusters.conf ${CLUSTER_CONF_FILE}
    reload
  fi
}
A set of bash scripts that add records to the database.
A web console to add/modify/delete records in the database (ExtJS + an nginx module).