Chef aws driver tags don't work using Etc.getlogin - ruby

I am currently using Chef Solo on a Windows machine. I used the fog driver before, which created tags for my instances on AWS. Recently I moved to the aws driver and noticed that it does not handle tagging, so I tried writing my own code to create the tags. One of the tags is "Owner", which tells me who created the instance. For this, I am using the following code:
def get_admin_machine_options()
  case get_provisioner()
  when "cccis-environments-aws"
    general_machine_options = {
      ssh_username: "root",
      create_timeout: 7000,
      use_private_ip_for_ssh: true,
      aws_tags: { Owner: Etc.getlogin.to_s }
    }
    general_bootstrap_options = {
      key_name: KEY_NAME,
      image_id: "AMI",
      instance_type: "m3.large",
      subnet_id: "subnet",
      security_group_ids: ["sg-"],
    }
    bootstrap_options = Chef::Mixin::DeepMerge.hash_only_merge(general_bootstrap_options, {})
    return Chef::Mixin::DeepMerge.hash_only_merge(general_machine_options, { bootstrap_options: bootstrap_options })
  else
    raise "Unknown provisioner #{get_setting('CHEF_PROFILE')}"
  end
end
machine admin_name do
  recipe "random.rb"
  machine_options get_admin_machine_options()
  ohai_hints ohai_hints
  action $provisioningAction
end
Now, this works fine on my machine: the instance is created with the proper tags. But when I run the same code on someone else's machine, it doesn't create the tags at all. I find this very weird. Does anyone know what's happening? It is the same code!

Okay, so I found the issue. I was using version 1.2.1 of the chef-provisioning-aws gem, while everyone else was on 1.1.1.
Version 1.1.1 does not support tagging, so it silently ignored the aws_tags option.
I uninstalled the old gem and installed the new one. It worked like a charm!
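If the workstations manage their gems with Bundler (an assumption; the question doesn't say how the gem was installed), pinning the version in the Gemfile is a simple way to keep everyone on the same behavior:

# Gemfile: pin chef-provisioning-aws so every workstation resolves the same
# version; 1.2.1 is the version the answer confirms supports aws_tags.
gem "chef-provisioning-aws", "1.2.1"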

Related

Rails multi_db sharding middleware not running in production

I have this in my multi_db.rb file:
Rails.application.configure do
  config.active_record.shard_selector = { lock: true }
  config.active_record.shard_resolver = ->(request) {
    puts "MULTI_DB: subdomain = #{request.subdomain}"
    request.subdomain == "fr" ? "french" : "default"
  }
end
Pretty straightforward: I'm trying to route to a different shard based on language, and this works fine locally. Every time I issue a request, I see the puts above print the debug line. But in prod I don't see this at all; the code is simply not running.
What could I be missing?
Well, this turned out to be a version mismatch. I was running Rails 7.0.4 locally, but prod was running 7.0.2.4. When I updated prod to 7.0.4, things worked fine. I'm not sure whether there was a problem with initializers in 7.0.2.4 or just some version shenanigans.
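For reference, the strings the resolver returns have to line up with shard names declared on the connection class. That part isn't shown in the question, so this is only a sketch of the usual Rails 7 multi-database setup, with illustrative database.yml entry names (primary, primary_fr):

# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # The shard keys (:default, :french) must match what shard_resolver
  # returns; the database.yml entries are assumptions for this sketch.
  connects_to shards: {
    default: { writing: :primary },
    french:  { writing: :primary_fr }
  }
end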

List all the declared packages in chef

I'm working on an infrastructure where some servers don't have access to the internet, so I have to push packages to the local repo before declaring them to be installed with Chef.
However, we've been in a situation where Chef failed to install a package on some boxes because the package wasn't in the repo, while succeeding on other boxes.
What I want to do is run a Ruby/RSpec test before applying the Chef config on the nodes, to make sure the packages declared in the recipes actually exist in the repo.
In order to do that, I need to be able to list all the packages that exist in our recipes.
My question is: is there any way to list all the declared packages in Chef? I had a quick look at Chef::Platform and ChefSpec but unfortunately couldn't find anything useful for my problem.
Do you have any idea where the best place to look is?
If you use ChefSpec you can find all the packages by calling chef_run.find_resources(:package) inside a test. See the source code. Like this:
require 'chefspec'

describe 'example::default' do
  let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

  it 'lists every declared package' do
    # find_resources(:package) returns all package resources collected
    # during the converge.
    chef_run.find_resources(:package).each do |pkg|
      puts pkg.name
    end
  end
end
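Building on that, here is a sketch of the pre-flight check the question describes. The repo_packages list is hypothetical; in practice you would load the names available in your local repo from a file or a repo query:

require 'chefspec'

describe 'example::default' do
  let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

  # Hypothetical list of package names known to exist in the local repo.
  let(:repo_packages) { %w[httpd ntp openssl] }

  it 'only declares packages that exist in the local repo' do
    chef_run.find_resources(:package).each do |pkg|
      expect(repo_packages).to include(pkg.package_name)
    end
  end
end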
You could install one or more of the community Ohai plugins. For example, the following will return information about installed software:
debian
Redhat
windows
Once the plugins are enabled, they will add additional node attributes that are searchable from the Chef server.

Unable to use application_ruby cookbook with Chef 11.8.0, Cannot find a resource for bundle_options

I have been attempting to set up a Chef recipe which installs Ruby using RVM and then uses the application_ruby cookbook to configure the application. However, I keep running into the error:
NameError: Cannot find a resource for bundle_options on ubuntu version 12.04
I am using the following code
application "application setup" do
owner "ubuntu"
group "ubuntu"
repository "https://github.com/me/myapplication.git" // Real address removed
path rails_app_path
revision "master"
rails do
bundler true
precompile_assets true
bundler_deployment true
end
end
I noticed that bundle_options was recently added (https://github.com/opscode-cookbooks/application_ruby/commit/e7719170a661a957796e8e5d58ba8f4ecd937487), but I am unable to track down whether this is causing the issue. I have included
depends "application"
depends "application_ruby"
in my metadata.rb and made sure all my dependencies are installed, so I am unsure what I am doing wrong at this point.
According to the documentation, bundle_options is an attribute of the rails resource, not a resource itself.
The only correct way to use it is INSIDE the rails block, so you got the message because you used it either as:
an attribute of the application resource (but outside of the rails block), or
a standalone resource (outside of any resource).
The message you mentioned is displayed whenever a nonexistent resource is referenced, e.g. if you had tried to execute the following code on your system:
nonexistent_resource "failure gonna happen" do
  some_attribute "whatever_value"
end
you would have got the message:
NameError: Cannot find a resource for nonexistent_resource on Ubuntu version 12.04
I ran into this problem today as well. It appears the problem is that commit e771917 forgot to add the necessary getter for bundle_options. Someone filed a PR to fix it (https://github.com/poise/application_ruby/pull/44), but it has not yet been merged. I can confirm that when I made that change locally, the error went away. The forked branch in the PR is at https://github.com/mauriciosilva/application_ruby/tree/bundle_options_fix.
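For context, the missing piece is the usual Chef resource attribute idiom, roughly like the sketch below. This is the general pattern, not the exact code from the PR, and the kind_of constraint is an assumption:

# Combined getter/setter for a Chef 11 resource attribute, backed by
# set_or_return; without a method like this, referencing bundle_options
# inside the resource block falls through to a resource lookup and
# raises the NameError above.
def bundle_options(arg = nil)
  set_or_return(:bundle_options, arg, kind_of: String)
end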

bluepill not detecting that processes have, in fact, started successfully, and so creates new ones

I have one (EC2) Ubuntu server where bluepill is working just fine to start and monitor resque processes (and it has done so on other nodes in the past).
I'm setting up a new node, and for some reason on this node bluepill does not recognize that the processes have started and are running, and so it keeps creating new ones. I'm a little baffled by what's causing this. The two nodes are almost identical; they're both EC2 servers provisioned by the same Chef scripts. It is true that the one not working is 'production' and the other 'staging', but there's almost no difference due to that.
Any thoughts or suggestions before I fork the GitHub project and start inserting more monitoring to try to figure out what's going on? There's been discussion on this list in the past about trouble with bluepill and resque, but as I said, this is working fine on my staging server and has worked fine on earlier production servers (although I will note that this new production server is on Ruby 1.9.3 (vs 1.9.2) and Rails 3.2 (vs 3.1)).
Here's my .pill file (or more specifically, my chef cookbook's template file):
ENV["RAILS_ENV"] = "<%= node.chef_environment %>"
ENV["QUEUE"] = "*"
Bluepill.application("zmx_app") do |app|
app.working_dir = "/srv/zmx/current"
app.uid = "root"
app.gid = "root"
2.times do |i|
app.process("resque-#{i}") do |process|
process.group = "resque"
process.start_command = "rake resque:work"
process.pid_file = "/srv/zmx/current/tmp/pids/resque_workers-#{i}.pid"
process.stop_command = "kill -QUIT {{PID}}"
process.daemonize = true
end
end
end
This turned out to be a bug in bluepill, which I forked and fixed, and I have submitted a pull request.
I'm not sure why I didn't realize earlier that there was, in fact, a difference between my two environments: staging/old production was on bluepill 0.0.55, while my new production environment was on 0.0.58.
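Since both nodes are provisioned by the same Chef scripts, pinning the gem version in the cookbook is one way to prevent this kind of drift. A minimal sketch, assuming bluepill is installed via gem_package (the version shown is the one the working nodes ran):

# Pin bluepill so staging and production converge to the same version.
gem_package "bluepill" do
  version "0.0.55"
  action :install
end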

Puppet Server and Client working Good but still manifest file doesn't get executed

I am currently working on Puppet using Amazon Fedora EC2 instances. Both the Puppet server and client are working fine: I am able to create a certificate request from the client and the server is able to sign it, but whatever code I have written in the manifest files still doesn't get executed.
Below is my code in the Site.pp file:
class test_class {
  file { "/tmp/testfile":
    ensure => present,
    mode   => 644,
    owner  => root,
    group  => root,
  }
}

node puppetclient {
  include test_class
}
Here, puppetclient is the hostname of the client. But even after signing the certificate, /tmp/testfile doesn't get created.
DNS is also working perfectly fine; I can ping the Puppet server (named 'puppet') from the Puppet client.
Can you tell me what the mistake might be?
It's probably just a typo in the question, but the default main manifest is 'site.pp', not 'Site.pp', so try it with 'site.pp' instead.
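Not from the answer, but another thing worth ruling out: node blocks match on the agent's certname, which isn't always the short hostname. A default node block (a sketch in the same manifest style) makes such a mismatch easy to spot, since the class then applies to every node that matches no other definition:

# Fallback for any agent whose certname matches no explicit node block;
# if /tmp/testfile appears with this in place, the 'puppetclient' name
# in the original node definition doesn't match the agent's certname.
node default {
  include test_class
}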
