I am trying to create a Chef cookbook that launches multiple mpd instances in my Vagrant virtual box (using chef-solo).
I want to configure each instance in my Vagrantfile like this:
mpd: {
  channels: {
    mix: {
      name: 'mpd_mix',
      bind: '0.0.0.0',
      socket: '/home/vagrant/.mpd/socket/mix',
      port: '6600'
    },
    tech: {
      name: 'mpd_tech',
      bind: '0.0.0.0',
      socket: '/home/vagrant/.mpd/socket/tech',
      port: '6601'
    }
  }
}
So the recipe should take these settings and loop through them (creating an mpd instance for each channel).
This is what I currently have as a recipe:
package "mpd"
node.normal[:mpd][:channels].each_value do |channel|
# create socket
file channel[:socket] do
action :touch
end
# trying to set the attributes for the config file and the service
node.set[:mpd][:port] = channel[:port]
node.set[:mpd][:db_file] = "/var/lib/mpd/tag_cache_" + channel[:name]
node.set[:mpd][:bind_2] = channel[:socket]
node.set[:mpd][:icecast_mountpoint] = "/" + channel[:name] + ".mp3"
node.set[:mpd][:channel_name] = channel[:name]
# create service
service channel[:name] do
service_name "mpd" # linux service command
action :enable
end
# create the corresponding config file
config_filename = "/etc/" + channel[:name] + ".conf"
template config_filename do
source "mpd.conf.erb"
mode "0644"
notifies :restart, resources(:service => channel[:name])
end
end
I have several problems with this:
It does not create a system service for each mpd instance (so that I could run sudo service mpd_mix start). Why?
It does not use the /etc/mpd_mix.conf config file when launching mpd, because it still calls /etc/init.d/mpd start, which uses /etc/mpd.conf. How can I change that so each mpd instance uses its own config file?
Adjusting the attributes for the creation of the config files does not work as expected (see the node.set part in the code above). Both config files, /etc/mpd_tech.conf and /etc/mpd_mix.conf, use the tech channel attributes. It looks like the mix settings get overwritten somehow. How can I fix that?
I'd really appreciate some help on this as I am quite new to chef cookbooks.
I figured out how to do it. Here is the relevant code part:
node[:mpd][:channels].each_value do |channel|
  # create socket
  file channel[:socket] do
    action :touch
  end

  # create init file
  init_filename = "/etc/init.d/" + channel[:name]
  template init_filename do
    variables :channel => channel
    source "mpd.init.erb"
    mode "0755"
  end

  # create service
  service channel[:name] do
    service_name channel[:name] # linux service command
    action :enable
  end

  # create config file
  config_filename = "/etc/" + channel[:name] + ".conf"
  template config_filename do
    variables :channel => channel
    source "mpd.conf.erb"
    mode "0644"
    notifies :restart, resources(:service => channel[:name])
  end
end
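Since each template receives the channel hash via variables, the ERB templates can read every per-instance setting from @channel. As a rough sketch (not the actual template from the repo below; the db_file path and the exact option layout are assumptions), mpd.conf.erb could look like this:

# sketch of mpd.conf.erb -- values come from the :channel variable passed above
port            "<%= @channel[:port] %>"
bind_to_address "<%= @channel[:bind] %>"
bind_to_address "<%= @channel[:socket] %>"
db_file         "/var/lib/mpd/tag_cache_<%= @channel[:name] %>"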
If you want to take a closer look, take a look at the complete cookbook repository on github: https://github.com/i42n/chef-cookbook-mpd/blob/master/recipes/default.rb
I have a recipe that iterates over a hash of SQL scripts in an each method and, in case a script changed since the previous run, the cookbook_file resource notifies the execute resource to run.
The issue is that it seems to always run the execute using the last element of the hash.
Here is the attributes file:
default['sql_scripts_dir'] = 'C:\\DBScripts'
default['script_runner']['scripts'] = [
  { 'name' => 'test', 'hostname' => 'local' },
  { 'name' => 'test2', 'hostname' => 'local' },
  { 'name' => 'test3', 'hostname' => 'local' },
  { 'name' => 'test4', 'hostname' => 'local' },
]
And the recipe
directory node['sql_scripts_dir'] do
  recursive true
end

node['script_runner']['scripts'].each do |script|
  cookbook_file "#{node['sql_scripts_dir']}\\#{script['name']}.sql" do
    source "#{script['name']}.sql"
    action :create
    notifies :run, 'execute[Create_scripts]', :immediately
  end

  execute 'Create_scripts' do
    command "sqlcmd -S \"#{script['hostname']}\" -i \"#{node['sql_scripts_dir']}\\#{script['name']}.sql\""
    action :nothing
  end
end
And it produces the following output:
Recipe: test_runner::default
* directory[C:\DBScripts] action create
- create new directory C:\DBScripts
* cookbook_file[C:\DBScripts\test.sql] action create
- create new file C:\DBScripts\test.sql
- update content in file C:\DBScripts\test.sql from none to 8c40f1
--- C:\DBScripts\test.sql 2020-07-30 17:30:30.959220400 +0000
+++ C:\DBScripts/chef-test20200730-1500-11bz3an.sql 2020-07-30 17:30:30.959220400 +0000
@@ -1 +1,2 @@
+select @@version
* execute[Create_scripts] action run
================================================================================
Error executing action `run` on resource 'execute[Create_scripts]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of sqlcmd -S "local" -i "C:\DBScripts\test4.sql" ----
STDOUT:
STDERR: Sqlcmd: 'C:\DBScripts\test4.sql': Invalid filename.
---- End output of sqlcmd -S "local" -i "C:\DBScripts\test4.sql" ----
Ran sqlcmd -S "local" -i "C:\DBScripts\test4.sql" returned 1
The expected behavior is that the recipe runs the 4 scripts in the example sequentially, instead of running just the last one. What am I missing?
You are creating 4 nearly identical resources, all named execute[Create_scripts], and when the notification fires from the first cookbook_file resource being updated, Chef finds the last one of them to notify and runs against test4 (no matter which cookbook_file resource triggers the update).
The fix is to use string interpolation to change the name of the execute resources to be unique and to notify based on that unique name:
directory node['sql_scripts_dir'] do
  recursive true
end

node['script_runner']['scripts'].each do |script|
  cookbook_file "#{node['sql_scripts_dir']}\\#{script['name']}.sql" do
    source "#{script['name']}.sql"
    action :create
    notifies :run, "execute[create #{script['name']} scripts]", :immediately
  end

  execute "create #{script['name']} scripts" do
    command "sqlcmd -S \"#{script['hostname']}\" -i \"#{node['sql_scripts_dir']}\\#{script['name']}.sql\""
    action :nothing
  end
end
Note that this is a manifestation of the same issue behind the old CHEF-3694 warning message, where all four execute resources would be merged into one resource via "resource cloning", with the properties of each subsequent resource winning ("last-writer-wins").
In Chef 13 this was changed to remove resource cloning and the warning, and in most circumstances having two resources with the same name in the resource collection is totally harmless -- until you try to notify one of them. The resource notification system should really warn in this situation rather than silently picking the last matching resource (but between notifications, subscribes, lazy resolution and now unified_mode, that code is very complicated, and you only want it to fire under exactly the right conditions).
I am using Rails 5.2, Shrine 2.19 and tus server 2.3 for resumable file uploads.
routes.rb
mount Tus::Server => '/files'
model, file_resource.rb
class FileResource < ApplicationRecord
  # adds a `file` virtual attribute
  include ResumableFileUploader::Attachment.new(:file)
end
controllers/files_controller.rb
def create
  file = FileResource.new(permitted_params)
  ...
  file.save
config/initializers/shrine.rb
s3_options = {
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION']
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'file_library/shrine_cache', **s3_options),
  store: Shrine::Storage::S3.new(**s3_options), # public: true,
  tus: Shrine::Storage::Tus.new
}

Shrine.plugin :activerecord
Shrine.plugin :cached_attachment_data
config/initializers/tus.rb
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
  prefix: 'file_library',
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION'],
  retry_limit: 3
)
Tus::Server.opts[:redirect_download] = true
My issue is that I cannot override the generate_location method of the Shrine uploader class to store the files under a different folder structure in AWS S3.
All the files are created inside s3://bucket/file_library/ (the prefix provided in tus.rb). I want something like an s3://bucket/file_library/:user_id/:parent_id/ folder structure.
I found that the Tus configuration overrides all of my resumable_file_uploader class's custom options, and they have no effect on uploading.
resumable_file_uploader.rb
class ResumableFileUploader < Shrine
  plugin :validation_helpers # DOES NOT WORK
  plugin :pretty_location # DOES NOT WORK

  def generate_location(io, context = {}) # DOES NOT WORK
    f = context[:record]
    name = super # the default unique identifier
    puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<" * 10
    ['users', f.author_id, f.parent_id, name].compact.join('/')
  end

  Attacher.validate do # DOES NOT WORK
    validate_max_size 15 * 1024 * 1024, message: 'is too large (max is 15 MB)'
  end
end
So how can I create a custom folder structure in S3 using the tus options (since the Shrine options have no effect)?
A tus server upload doesn't touch Shrine at all, so #generate_location won't be called; instead the tus-ruby-server decides the location.
Note that the tus server should only act as temporary storage; you should still use Shrine to copy the file to permanent storage (aka "promote" it), just like with regular direct uploads. On promotion the #generate_location method will be called, so the file will be copied to the desired location; this all happens automatically with the default Shrine setup.
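To make that flow concrete, here is a rough sketch using the model and uploader names from the question (the exact promotion timing depends on your setup, so treat this as an assumption rather than your exact behavior):

# the record is created with the cached/tus file data sent by the client
file = FileResource.new(permitted_params)
file.save
# after save, Shrine promotes the cached file to the :store storage; only this
# promoted copy goes through ResumableFileUploader#generate_location, so its id
# ends up like "users/<author_id>/<parent_id>/<unique id>"
file.file.storage_key  # => "store" once promotion has finished
file.file.id           # => the location built by generate_location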
I am testing adding a watermark to a video once it is uploaded. I am running into an issue where Lambda wants me to specify which file to process on upload, but I want it to trigger when any file (really, any file that ends in .mov, .mp4, etc.) is uploaded.
To clarify, this all works when I create the pipeline and job manually.
Here's my code:
require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')

  resp = client.create_job({
    pipeline_id: "15521341241243938210-qevnz1", # required
    input: {
      key: File, # this is where my issue is
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      # thumbnail_pattern: "ThumbnailPattern",
      # thumbnail_encryption: {
      #   mode: "EncryptionMode",
      #   key: "Base64EncodedString",
      #   key_md_5: "Base64EncodedString",
      #   initialization_vector: "ZeroTo255String",
      # },
      # rotate: "Rotate",
      preset_id: "1351620000001-000001",
      # segment_duration: "FloatString",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
          # encryption: {
          #   mode: "EncryptionMode",
          #   key: "zk89kg4qpFgypV2fr9rH61Ng",
          #   key_md_5: "Base64EncodedString",
          #   initialization_vector: "ZeroTo255String",
          # },
        },
      ],
    }
  })
end
How do I specify just any uploaded file, or only files of a specific format, for the input key?
My issue is that I am using Active Storage, so the key doesn't end in .jpg or .mov, etc.; it is just a randomly generated string (they have reasons for doing this). I am trying to find a reason to use Active Storage, and this is my final step in making it work like the alternatives before it.
The extension field is optional. If you don't specify anything in it, the Lambda will be triggered no matter what file is uploaded. You can then check whether it's the type of file you want and proceed.
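As a rough sketch of that approach (the S3 event structure is standard; the pipeline, preset, output key and watermark values below are simply copied from your snippet), the handler can read the uploaded object's key from the event and filter it itself:

require 'cgi'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  # the S3 trigger passes the uploaded object's key in the event payload (URL-encoded)
  key = CGI.unescape(event['Records'][0]['s3']['object']['key'])

  # optional filter -- with Active Storage the key has no extension, so you may
  # need to look up the blob's content_type instead of matching on the key
  return unless key.match?(/\.(mov|mp4|m4v)\z/i)

  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  client.create_job(
    pipeline_id: "15521341241243938210-qevnz1",
    input: { key: key },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      preset_id: "1351620000001-000001",
      watermarks: [
        { preset_watermark_id: "TopRight", input_key: "uploads/2354n.jpg" }
      ]
    }
  )
end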
I'm writing a wrapper cookbook for nexus3 in which I override the default attributes like so, in my cookbook's attributes/default.rb file:
# Nexus Options
node.default['nexus3']['properties_variables'] = { port: '8383', host: '0.0.0.0', args: '${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml', context_path: '/nexus/' }
node.default['nexus3']['api']['host'] = 'http://localhost:8383'
node.default['nexus3']['api']['username'] = 'admin'
node.default['nexus3']['api']['password'] = 'Ch5f#A4min'
While Chef does install nexus3 with the overridden properties, the property values for nexus3_api fail to take effect during the cookbook run, as I can see in the logs:
==> provisioner: * execute[wait for http://localhost:8081/service/siesta/rest/v1/script to respond] action run
==> provisioner: [2018-06-11T05:58:17+00:00] INFO: Processing execute[wait for http://localhost:8081/service/siesta/rest/v1/script to respond] action run (/opt/chef/embedded/lib/ruby/gems/2.5.0/gems/chef-14.2.0/lib/chef/resource.rb line 1285)
==> provisioner: [2018-06-11T05:58:17+00:00] INFO: Processing execute[wait for http://localhost:8081/service/siesta/rest/v1/script to respond] action run (/opt/chef/embedded/lib/ruby/gems/2.5.0/gems/chef-14.2.0/lib/chef/resource.rb line 1285)
I'm running this cookbook through Vagrant's Chef provisioner, and my Vagrantfile is as follows:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
config.vm.define "provisioner" do |provisioner|
provisioner.vm.box = "ubuntu/xenial64"
provisioner.vm.box_version = "20180509.0.0"
provisioner.vm.box_check_update = false
provisioner.omnibus.chef_version = :latest
provisioner.vm.network "forwarded_port", guest: 8080, host: 8282
provisioner.vm.network "forwarded_port", guest: 8383, host: 8383
provisioner.vm.provider :virtualbox do |vbox|
vbox.name = "pipeline-jumpstart-chef"
vbox.memory = 2048
vbox.cpus = 2
end
provisioner.vm.provision "chef_solo" do |chef|
chef.node_name = "chef-provisioned"
chef.cookbooks_path = "../../cookbooks"
chef.verbose_logging = true
chef.add_recipe "pipeline-jumpstart-chef"
end
end
end
Here's the source for the cookbook on which I'm building the wrapper.
You mention you are overriding the attributes, but your code indicates you are setting those attributes at the default level. You should review Attribute Precedence in Chef to understand what default means exactly. In addition, inside an attributes file you don't need to prefix with node; just use default:
default['nexus3']['properties_variables'] = { port: '8383', host: '0.0.0.0', args: '${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml', context_path: '/nexus/' }
default['nexus3']['api']['host'] = 'http://localhost:8383'
default['nexus3']['api']['username'] = 'admin'
default['nexus3']['api']['password'] = 'Ch5f#A4min'
The node.default syntax is used inline, inside a recipe, to set attributes. If you review the precedence chart you'll notice that a default attribute set inline in a recipe sits one level higher than one set in an attributes file.
If you want to use override you can do this for each attribute:
override['nexus3']['properties_variables'] = { port: '8383', host: '0.0.0.0', args: '${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml', context_path: '/nexus/' }
override['nexus3']['api']['host'] = 'http://localhost:8383'
override['nexus3']['api']['username'] = 'admin'
override['nexus3']['api']['password'] = 'Ch5f#A4min'
However, unless it's absolutely necessary to set these attributes in the wrapper cookbook, you are likely better off setting them as default attributes at a higher precedence level, such as in a role. See the quote below from the Attribute Types section of the same document about override attributes:
An override attribute is automatically reset at the start of every chef-client run and has a higher attribute precedence than default, force_default, and normal attributes. An override attribute is most often specified in a recipe, but can be specified in an attribute file, for a role, and/or for an environment. A cookbook should be authored so that it uses override attributes only when required.
If you simply set these as default inside your wrapper cookbook's attributes/default.rb file, then both the source cookbook and your wrapper are trying to set the same attribute at the same level. This is likely to lead to unexpected behavior or simply not work.
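For example, the same values could live in a role instead of the wrapper's attributes file; default attributes set in a role rank above cookbook attribute files in the precedence chart. A sketch (the role name and run list below are just placeholders):

# roles/nexus.rb -- hypothetical role carrying the nexus3 settings
name 'nexus'
description 'Nexus 3 listening on port 8383'
run_list 'recipe[pipeline-jumpstart-chef]'

default_attributes(
  'nexus3' => {
    'properties_variables' => { 'port' => '8383', 'host' => '0.0.0.0' },
    'api' => { 'host' => 'http://localhost:8383', 'username' => 'admin' }
  }
)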
I am writing a Chef resource with the following logic:
Search an HTTP site for 'zip' content and download it.
After downloading, unzip the files and put them under a directory, e.g. /u01/var/.
Now here comes the tricky part: for each downloaded zip file I need to traverse through it and run the same operations, which apply to each of the different zip files.
My code:
require 'open-uri'
links = open(patch_depot ,&:read).to_s
out1=links.scan(/\s*\s*"([^"]*zip)"/imu).flatten
patch_files = out1.select{ |i| i[/\.zip$/]}
print patch_files
unless patch_files.length >= 1
  Chef::Log.info('No latest file found..!!')
else
  c = Dir.pwd
  Dir.chdir(cache_direc)
  patch_files.each do |patch|
    if ::File.exist?(::File.join(cache_direc, patch))
      Chef::Log.info("#{patch} file is already downloaded")
    else
      open(patch, 'wb') do |fo|
        fo.print open("#{patch_depot}/#{patch}").read
      end
      `unzip -qo #{patch}`
      Chef::Log.info("#{patch} is downloaded and extracted")
      FileUtils.chown_R osuser, usergroup, cache_direc
      FileUtils.chmod_R 0777, cache_direc
    end
  end
end
With the code above I am able to achieve points 1 and 2.
After this code block I have a ruby_block which updates a file, and a bash block which does some operation.
Like below:
ruby_block 'edit bsu.sh file' do
  block do
    bsu_sh_file = Chef::Util::FileEdit.new("#{bsu_loc}/utils/asu/mmy.sh")
    bsu_sh_file.search_file_replace_line(/^MEM_ARGS="-Xms256m -Xmx512m"*$/, "MEM_ARGS='-Xms1024m -Xmx1024m'")

    if bsu_sh_file.unwritten_changes? == false
      Chef::Log.info('No Changes need be made to mmy_sh_file')
    else
      bsu_sh_file.write_file
      Chef::Log.info('Changes made in mmy_sh_file file')
    end
  end
end
And the bash block:
bash "test bash" do
action:run
code <<-EOH
some operation
EOH
end
My HTTP content may have multiple zip files.
So for each zip file I need to unzip it and run the operations in the ruby_block and bash block.
Kindly provide a solution or a suggestion.
EDIT #1:
The code above is already part of a custom resource. I know I messed up the loop somewhere; the run does not move on inside the loop and iterate through the other actions.
Assuming you have a list of URLs you want to download from, you can have attributes like these:
default['your_cookbook']['files']['file_1'] = {
  address: 'http://whatever.com/file1.zip',
  name: 'file1.zip',
  path: '/where/you/want/it/',
  checksum: '12341234123412341234'
}

default['your_cookbook']['files']['file_2'] = {
  address: 'http://whatever.com/file2.zip',
  name: 'file2.zip',
  path: '/where/you/want/it/',
  checksum: '12341234123412341234'
}
Then you can do this:
node['your_cookbook']['files'].each_value do |file|
  remote_file file['name'] do
    path "/tmp/#{file['name']}"
    source file['address']
    checksum file['checksum']
  end

  execute "extract_#{file['name']}" do
    command "unzip /tmp/#{file['name']} -d #{file['path']}#{file['name']}"
    action :nothing
    subscribes :run, "remote_file[#{file['name']}]", :immediately
  end
end
I think this has the same functionality you need and avoids using custom resources in favor of the default ones, which are quite complete.
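If the per-archive follow-up steps from your question (the ruby_block / bash work) also have to run once per zip, the same notification pattern extends to them. A sketch, assuming the attributes and the extract_* execute resources from the recipe above, with a placeholder command standing in for your real bash work:

node['your_cookbook']['files'].each_value do |file|
  bash "post_process_#{file['name']}" do
    code "echo 'some operation on #{file['path']}#{file['name']}'" # placeholder for the real work
    action :nothing
    subscribes :run, "execute[extract_#{file['name']}]", :immediately
  end
end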