How to resolve Mixlib::ShellOut::ShellCommandFailed error - ruby

I am trying to mock node data and run ChefSpec, but I keep hitting the error below. ChefSpec version: 7.3.4; Chef version: 13.9.1.
Is there a problem with the simulation, and how do I resolve the handler error? Is there something I'm missing here? How can I debug this Mixlib::ShellOut::ShellCommandFailed error?
================================================================================
Error executing action `install` on resource 'python_package[setuptools]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of ["/usr/bin/python2.7", "-", "setuptools==40.5.0"] ----
STDOUT:
STDERR: No handlers could be found for logger "pip._internal.index"
Traceback (most recent call last):
File "<stdin>", line 43, in <module>
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/index.py", line 543, in find_requirement
'No matching distribution found for %s' % req
pip._internal.exceptions.DistributionNotFound: No matching distribution found for setuptools==40.5.0
---- End output of ["/usr/bin/python2.7", "-", "setuptools==40.5.0"] ----
Ran ["/usr/bin/python2.7", "-", "setuptools==40.5.0"] returned 1
Resource Declaration:
---------------------
# In /chef-recipes/cookbooks/poise-python/files/halite_gem/poise_python/python_providers/base.rb
138: python_package 'setuptools' do
139: parent_python new_resource
140: version setuptools_version if setuptools_version.is_a?(String)
141: end
142: end
Compiled Resource:
------------------
# Declared in /chef-recipes/cookbooks/poise-python/files/halite_gem/poise_python/python_providers/base.rb:138:in `install_setuptools'
python_package("setuptools") do
package_name "setuptools"
action [:install]
default_guard_interpreter :default
declared_type :python_package
cookbook_name :"poise-python"
parent_python python_runtime[2]
version "40.5.0"
timeout 900
end
System Info:
------------
chef_version=13.9.1
platform=ubuntu
platform_version=16.04
ruby=ruby 2.4.2p198 (2017-09-14 revision 59899) [x86_64-linux]
program_name=/usr/local/rvm/gems/ruby-2.4.2/bin/rspec
executable=/usr/local/rvm/gems/ruby-2.4.2/bin/ruby_executable_hooks
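One way to keep ChefSpec from actually shelling out to pip during the converge is to stub the provider-level shell-out calls with a fake result. Below is a minimal sketch, assuming a hypothetical cookbook my_cookbook and a placeholder node attribute (neither is from the question); depending on how poise-python wraps its commands you may need to stub a different helper:

# spec/unit/recipes/default_spec.rb -- minimal sketch with placeholder names
require 'chefspec'

describe 'my_cookbook::default' do
  # Fake Mixlib::ShellOut result so the converge never really runs
  # /usr/bin/python2.7 against PyPI
  let(:fake_shellout) do
    instance_double(Mixlib::ShellOut,
                    stdout: '{}', stderr: '', exitstatus: 0, error!: nil)
  end

  before do
    # Intercept both variants used by providers
    allow_any_instance_of(Chef::Provider)
      .to receive(:shell_out).and_return(fake_shellout)
    allow_any_instance_of(Chef::Provider)
      .to receive(:shell_out!).and_return(fake_shellout)
  end

  let(:chef_run) do
    ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04') do |node|
      node.normal['my_cookbook']['example'] = 'value' # mocked node data
    end.converge(described_recipe)
  end

  it 'converges successfully' do
    expect { chef_run }.to_not raise_error
  end
end

Note that the traceback above shows the real pip command was executed and could not find setuptools==40.5.0; stubbing the shell-out sidesteps that network dependency entirely in the spec.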

Related

Spawning child processes on HPC using Slurm

I encountered a problem when spawning child processes on an HPC cluster using a Slurm script. The parent process is a Python script, and the child process is a C++ program. The source code is below:
parent process:
# parent process : mpitest.py
from mpi4py import MPI
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=x)
child process:
// child process: test_mpi.cpp, in which parallelization is implemented using boost::mpi
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <boost/mpi.hpp>
#include <iostream>
using namespace boost;

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;
    int commrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &commrank);
    std::cout << commrank << std::endl;
    return 0;
}
the slurm script:
#!/bin/bash
#SBATCH --job-name=mpitest
#SBATCH --partition=day
#SBATCH -N 2
#SBATCH -n 4
#SBATCH -c 6
#SBATCH --mem 5G
#SBATCH -t 01-00:00:00
#SBATCH --output="mpitest.out"
#SBATCH --error="mpitest.error"
#run program
module load Boost/1.74.0-gompi-2020b
#the MPI version is: OpenMPI/4.0.5
mpirun -np y python mpitest.py
In the above code there are two parameters: x (maxprocs in MPI.COMM_SELF.Spawn) and y (-np in mpirun). The Slurm script only ran normally with x = 1, y = 1. However, when I tried to increase x and y, the following error occurred (x = 1, y = 2):
[c05n04:13182] pml_ucx.c:178 Error: Failed to receive UCX worker address: Not found (-13)
[c05n04:13182] [[31687,1],1] ORTE_ERROR_LOG: Error in file dpm/dpm.c at line 493
[c05n11:07942] pml_ucx.c:178 Error: Failed to receive UCX worker address: Not found (-13)
[c05n11:07942] [[31687,2],0] ORTE_ERROR_LOG: Error in file dpm/dpm.c at line 493
Traceback (most recent call last):
File "mpitest.py", line 4, in <module>
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=1)
File "mpi4py/MPI/Comm.pyx", line 1534, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: MPI_ERR_OTHER: known error not in list
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_dpm_dyn_init() failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
[c05n11:07942] *** An error occurred in MPI_Init
[c05n11:07942] *** reported by process [2076639234,0]
[c05n11:07942] *** on a NULL communicator
[c05n11:07942] *** Unknown error
[c05n11:07942] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[c05n11:07942] *** and potentially your MPI job)
Similarly, when x = 2, y = 2, the following error occurred:
--------------------------------------------------------------------------
All nodes which are allocated for this job are already filled.
--------------------------------------------------------------------------
Traceback (most recent call last):
File "mpitest.py", line 4, in <module>
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=2)
File "mpi4py/MPI/Comm.pyx", line 1534, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: MPI_ERR_SPAWN: could not spawn processes
[c18n08:16481] pml_ucx.c:178 Error: Failed to receive UCX worker address: Not found (-13)
[c18n08:16481] [[54742,1],0] ORTE_ERROR_LOG: Error in file dpm/dpm.c at line 493
[c18n11:01329] pml_ucx.c:178 Error: Failed to receive UCX worker address: Not found (-13)
[c18n11:01329] [[54742,2],0] ORTE_ERROR_LOG: Error in file dpm/dpm.c at line 493
[c18n11:01332] pml_ucx.c:178 Error: Failed to receive UCX worker address: Not found (-13)
[c18n11:01332] [[54742,2],1] ORTE_ERROR_LOG: Error in file dpm/dpm.c at line 493
Traceback (most recent call last):
File "mpitest.py", line 4, in <module>
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=2)
File "mpi4py/MPI/Comm.pyx", line 1534, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: MPI_ERR_OTHER: known error not in list
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_dpm_dyn_init() failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
[c18n11:01332] *** An error occurred in MPI_Init
[c18n11:01332] *** reported by process [3587571714,1]
[c18n11:01332] *** on a NULL communicator
[c18n11:01332] *** Unknown error
[c18n11:01332] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[c18n11:01332] *** and potentially your MPI job)
[c18n08:16469] 1 more process has sent help message help-mpi-runtime.txt / mpi_init:startup:internal-failure
[c18n08:16469] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[c18n08:16469] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
What I want to do is use several nodes to run the C++ program in parallel from this Python script, i.e. x = 1, y > 1. What should I modify in the Python code or the Slurm script?
Update: following the advice from Gilles Gouaillarde, I modified the slurm script:
#!/bin/bash
#SBATCH --job-name=mpitest
#SBATCH --partition=day
#SBATCH -N 2
#SBATCH -n 4
#SBATCH -c 6
#SBATCH --mem 5G
#SBATCH -t 01-00:00:00
#SBATCH --output="mpitest.out"
#SBATCH --error="mpitest.error"
#run program
module load Boost/1.74.0-gompi-2020b
#the MPI version is: OpenMPI/4.0.5
mpirun --mca pml ^ucx --mca btl ^ucx --mca osc ^ucx -np 2 python mpitest.py
It still fails with the following error (here mpirun -np 2 and maxprocs=1 were used):
[c18n08:21555] [[49573,1],0] ORTE_ERROR_LOG: Not found in file dpm/dpm.c at line 493
[c18n10:03381] [[49573,3],0] ORTE_ERROR_LOG: Not found in file dpm/dpm.c at line 493
Traceback (most recent call last):
File "mpitest.py", line 4, in <module>
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=1)
File "mpi4py/MPI/Comm.pyx", line 1534, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: MPI_ERR_INTERN: internal error
[c18n08:21556] [[49573,1],1] ORTE_ERROR_LOG: Not found in file dpm/dpm.c at line 493
[c18n10:03380] [[49573,2],0] ORTE_ERROR_LOG: Not found in file dpm/dpm.c at line 493
Traceback (most recent call last):
File "mpitest.py", line 4, in <module>
sub_comm = MPI.COMM_SELF.Spawn('test_mpi', args=[], maxprocs=1)
File "mpi4py/MPI/Comm.pyx", line 1534, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: MPI_ERR_INTERN: internal error
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_dpm_dyn_init() failed
--> Returned "Not found" (-13) instead of "Success" (0)
--------------------------------------------------------------------------
[c18n10:03380] *** An error occurred in MPI_Init
[c18n10:03380] *** reported by process [3248816130,0]
[c18n10:03380] *** on a NULL communicator
[c18n10:03380] *** Unknown error
[c18n10:03380] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[c18n10:03380] *** and potentially your MPI job)
[c18n08:21542] 1 more process has sent help message help-mpi-runtime.txt / mpi_init:startup:internal-failure
[c18n08:21542] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[c18n08:21542] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle

Recipe Compile Error in /var/chef/cache/cookbooks/ambari/recipes/blueprints.rb

Recipe Compile Error in /var/chef/cache/cookbooks/ambari/recipes/blueprints.rb
NoMethodError
-------------
undefined method `[]' for nil:NilClass
Cookbook Trace:
---------------
/var/chef/cache/cookbooks/ambari/recipes/blueprints.rb:54:in `block in from_file'
/var/chef/cache/cookbooks/ambari/recipes/blueprints.rb:53:in `from_file'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/run_context.rb:347:in `load_recipe'
Relevant File Content:
----------------------
/var/chef/cache/cookbooks/ambari/recipes/blueprints.rb:
47: end
48: end
49:
50: basic_auth_parameters = "--user #{node['ambari']['admin_user']}:#{node['ambari']['admin_password']}"
51:
52:
53: file '/tmp/blueprint.json' do
54>> content Chef::JSONCompat.to_json_pretty(node['ambari']['blueprints']['blueprint_json'].to_hash)
55: end
56:
57: file '/tmp/cluster.json' do
58: content Chef::JSONCompat.to_json_pretty(node['ambari']['blueprints']['cluster_json'].to_hash)
59: end
60:
61: execute 'Init Blueprints' do
62: command "curl #{basic_auth_parameters} -H 'X-Requested-By:ambari-cookbook' --data #/tmp/blueprint.json #{ambari_server_fqdn}:8080/api/v1/blueprints/#{node['ambari']['blueprints']['blueprint_name']}"
63: end
Platform:
---------
x86_64-linux
Running handlers:
[2016-11-27T13:15:03-06:00] ERROR: Running exception handlers
Running handlers complete
[2016-11-27T13:15:03-06:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 09 seconds
[2016-11-27T13:15:03-06:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2016-11-27T13:15:03-06:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-11-27T13:15:03-06:00] ERROR: undefined method `[]' for nil:NilClass
[2016-11-27T13:15:03-06:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
One of the intermediate keys in your node attributes is not set. Either put a default value in your cookbook's attributes file or otherwise ensure that it is set.
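A minimal sketch of such a default in attributes/default.rb (the values are placeholders; substitute your real blueprint data):

# attributes/default.rb -- defaults so the nested lookup
# node['ambari']['blueprints']['blueprint_json'] never hits a nil key
default['ambari']['blueprints']['blueprint_name'] = 'my-blueprint'
default['ambari']['blueprints']['blueprint_json'] = {}
default['ambari']['blueprints']['cluster_json'] = {}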

AWS OpsWorks S3 403 RestClient::Forbidden

I am new to Chef. I uploaded a very simple cookbook as a zip file to S3, but I always get this error:
[2015-12-07T10:29:53+00:00] INFO: Not needed with Chef 11.x (x >= 8) anymore.
[2015-12-07T10:29:53+00:00] INFO: Processing package[git] action install (opsworks_custom_cookbooks::checkout line 21)
[2015-12-07T10:29:54+00:00] INFO: Processing package[perl-Digest-HMAC] action install (opsworks_custom_cookbooks::checkout line 22)
[2015-12-07T10:29:54+00:00] INFO: Processing package[unzip] action install (opsworks_custom_cookbooks::checkout line 24)
[2015-12-07T10:29:54+00:00] INFO: Processing template[/root/.s3curl] action create (opsworks_custom_cookbooks::checkout line 24)
[2015-12-07T10:29:54+00:00] INFO: template[/root/.s3curl] created file /root/.s3curl
[2015-12-07T10:29:54+00:00] INFO: template[/root/.s3curl] updated file contents /root/.s3curl
[2015-12-07T10:29:54+00:00] INFO: template[/root/.s3curl] mode changed to 600
[2015-12-07T10:29:54+00:00] INFO: Processing directory[/tmp/opsworks20151207-2439-1lgn8x6] action create (opsworks_custom_cookbooks::checkout line 32)
[2015-12-07T10:29:54+00:00] INFO: directory[/tmp/opsworks20151207-2439-1lgn8x6] mode changed to 755
[2015-12-07T10:29:54+00:00] INFO: Processing s3_file[/tmp/opsworks20151207-2439-1lgn8x6/archive] action create (opsworks_custom_cookbooks::checkout line 38)
[2015-12-07T10:29:54+00:00] INFO: Processing chef_gem[rest-client] action install (s3_file::dependencies line 1)
[2015-12-07T10:29:54+00:00] WARN: #<RestClient::RawResponse:0x0055dd47315478 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-1b2rwtr>, @code=403>
[2015-12-07T10:29:59+00:00] WARN: #<RestClient::RawResponse:0x0055dd474094d8 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-pvv1v>, @code=403>
[2015-12-07T10:30:04+00:00] WARN: #<RestClient::RawResponse:0x0055dd474b7268 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-5e98l1>, @code=403>
[2015-12-07T10:30:09+00:00] WARN: #<RestClient::RawResponse:0x0055dd481acf18 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-10607v8>, @code=403>
[2015-12-07T10:30:14+00:00] WARN: #<RestClient::RawResponse:0x0055dd48279f90 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-1659itz>, @code=403>
[2015-12-07T10:30:19+00:00] FATAL: #<RestClient::RawResponse:0x0055dd48300658 @net_http_res=#<Net::HTTPForbidden 403 Forbidden readbody=true>, @args={:method=>"GET", :url=>"https://jb-chef-cookbook.s3.amazonaws.com/cookbooks+3.zip", :raw_response=>true}, @file=#<Tempfile:/tmp/rest-client20151207-2439-1y4w8iq>, @code=403>
================================================================================
Error executing action `create` on resource 's3_file[/tmp/opsworks20151207-2439-1lgn8x6/archive]'
================================================================================
RestClient::Forbidden
---------------------
403 Forbidden
Cookbook Trace:
---------------
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:101:in `block in do_request'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:83:in `rescue in with_region_detect'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:78:in `with_region_detect'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:92:in `do_request'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:124:in `block in get_from_s3'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:122:in `each'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/libraries/s3_file.rb:122:in `get_from_s3'
/var/lib/aws/opsworks/cache.stage1/cookbooks/s3_file/providers/default.rb:65:in `block in class_from_file'
Resource Declaration:
---------------------
# In /var/lib/aws/opsworks/cache.stage1/cookbooks/scm_helper/libraries/s3.rb
38: s3_file "#{tmpdir}/archive" do
39: bucket s3_bucket
40: remote_path s3_key
41: aws_access_key_id scm_options[:user]
42: aws_secret_access_key scm_options[:password]
43: owner "root"
44: group "root"
45: mode "0600"
46: # per default it's host-style addressing
47: # but older versions of rest-client doesn't support host-style addressing with `_` in bucket name
48: s3_url "https://s3.amazonaws.com/#{s3_bucket}" if s3_bucket.include?("_")
49: action :create
50: end
51:
52: execute 'extract files' do
53: command "#{node[:opsworks_agent][:current_dir]}/bin/extract #{tmpdir}/archive"
54: end
55:
56: execute 'create git repository' do
57: cwd "#{tmpdir}/archive.d"
58: command "find . -type d -name .git -exec rm -rf {} \\;; find . -type f -name .gitignore -exec rm -f {} \\;; git init; git add .; git config user.name 'AWS OpsWorks'; git config user.email 'root#localhost'; git commit -m 'Create temporary repository from downloaded contents.'"
59: end
60:
61: "#{tmpdir}/archive.d"
62: end
63: end
64: end
Compiled Resource:
------------------
# Declared in /var/lib/aws/opsworks/cache.stage1/cookbooks/scm_helper/libraries/s3.rb:38:in `prepare_s3_checkouts'
s3_file("/tmp/opsworks20151207-2439-1lgn8x6/archive") do
action [:create]
retries 0
retry_delay 2
cookbook_name "opsworks_custom_cookbooks"
recipe_name "checkout"
bucket "jb-chef-cookbook"
remote_path "cookbooks+3.zip"
owner "root"
group "root"
mode "0600"
path "/tmp/opsworks20151207-2439-1lgn8x6/archive"
end
[2015-12-07T10:30:19+00:00] INFO: Running queued delayed notifications before re-raising exception
[2015-12-07T10:30:19+00:00] ERROR: Running exception handlers
[2015-12-07T10:30:19+00:00] ERROR: Exception handlers complete
[2015-12-07T10:30:19+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage1/chef-stacktrace.out
[2015-12-07T10:30:19+00:00] ERROR: 403 Forbidden
[2015-12-07T10:30:19+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
What is missing? Any advice? Thanks.
The S3 file you are trying to access doesn't have its permissions set up correctly.
Thanks, it was my ACLs. I fixed that, and now I have a different problem!
[2015-12-08T13:07:18+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found: /reports/nodes/c2.localdomain/runs
[2015-12-08T13:07:18+00:00] INFO: HTTP Request Returned 412 Precondition Failed: No such cookbook: docker
Missing Cookbooks:
No such cookbook: docker
[2015-12-08T13:07:18+00:00] ERROR: Running exception handlers
[2015-12-08T13:07:18+00:00] ERROR: Exception handlers complete
[2015-12-08T13:07:18+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2015-12-08T13:07:18+00:00] ERROR: 412 "Precondition Failed"
[2015-12-08T13:07:18+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
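The 412 Precondition Failed means the run list or a cookbook dependency references a docker cookbook that was never uploaded to the server. A sketch of the usual fix, assuming the reference comes from your own cookbook's metadata ('my_cookbook' is a placeholder):

# metadata.rb -- declare the dependency so the docker cookbook is
# resolved and uploaded along with yours
name 'my_cookbook'
depends 'docker'

Then make sure the docker cookbook itself is uploaded, e.g. with berks upload (if you use Berkshelf) or knife cookbook upload docker, before re-running chef-client.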

Cloud Init Fails to Install Packages on CentOS 7 in EC2

I have a cloud-config file defined in my EC2 instance's user data. Many of the parts/actions run properly, but package installation invariably fails:
#cloud-config
package-update: true
package-upgrade: true
# ...
packages:
- puppet3
- lvm2
- btrfs-progs
# ...
I see the following in the logs:
May 20 20:23:39 cloud-init[1252]: util.py[DEBUG]: Package update failed
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.py", line 74, in handle
cloud.distro.update_package_sources()
File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 278, in update_package_sources
["makecache"], freq=PER_INSTANCE)
File "/usr/lib/python2.6/site-packages/cloudinit/helpers.py", line 197, in run
results = functor(*args)
File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 274, in package_command
util.subp(cmd, capture=False, pipe_cat=True, close_stdin=True)
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 1529, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['yum', '-t', '-y', 'makecache']
Exit code: 1
Reason: -
Stdout: ''
Stderr: ''
May 20 20:23:39 cloud-init[1252]: amazon.py[DEBUG]: Upgrade level: security
May 20 20:23:39 cloud-init[1252]: util.py[DEBUG]: Running command ['yum', '-t', '-y', '--exclude=kernel', '--exclude=nvidia*', '--exclude=cudatoolkit', '--security', '--sec-severity=critical', '--sec-severity=important', 'upgrade'] with allowed return codes [0] (shell=False, capture=False)
May 20 20:23:52 cloud-init[1252]: util.py[WARNING]: Package upgrade failed
May 20 20:23:52 cloud-init[1252]: util.py[DEBUG]: Package upgrade failed
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.py", line 81, in handle
cloud.distro.upgrade_packages(upgrade_level, upgrade_exclude)
File "/usr/lib/python2.6/site-packages/cloudinit/distros/amazon.py", line 50, in upgrade_packages
return self.package_command('upgrade', args=args)
File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 274, in package_command
util.subp(cmd, capture=False, pipe_cat=True, close_stdin=True)
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 1529, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['yum', '-t', '-y', '--exclude=kernel', '--exclude=nvidia*', '--exclude=cudatoolkit', '--security', '--sec-severity=critical', '--sec-severity=important', 'upgrade']
Exit code: 1
Reason: -
Stdout: ''
Stderr: ''
May 20 20:23:52 cloud-init[1252]: util.py[DEBUG]: Running command ['yum', '-t', '-y', 'install', 'puppet3', 'lvm2', 'btrfs-progs'] with allowed return codes [0] (shell=False, capture=False)
May 20 20:24:03 cloud-init[1252]: util.py[WARNING]: Failed to install packages: ['puppet3', 'lvm2', 'btrfs-progs']
May 20 20:24:03 cloud-init[1252]: util.py[DEBUG]: Failed to install packages: ['puppet3', 'lvm2', 'btrfs-progs']
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.py", line 88, in handle
cloud.distro.install_packages(pkglist)
File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 70, in install_packages
self.package_command('install', pkgs=pkglist)
File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 274, in package_command
util.subp(cmd, capture=False, pipe_cat=True, close_stdin=True)
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 1529, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['yum', '-t', '-y', 'install', 'puppet3', 'lvm2', 'btrfs-progs']
Exit code: 1
Reason: -
Stdout: ''
Stderr: ''
May 20 20:24:03 cloud-init[1252]: cc_package_update_upgrade_install.py[WARNING]: 3 failed with exceptions, re-raising the last one
May 20 20:24:03 cloud-init[1252]: util.py[WARNING]: Running package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_upgrade_install' from '/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.pyc'>) failed
May 20 20:24:03 cloud-init[1252]: util.py[DEBUG]: Running package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_upgrade_install' from '/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.pyc'>) failed
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/stages.py", line 553, in _run_modules
cc.run(run_name, mod.handle, func_args, freq=freq)
File "/usr/lib/python2.6/site-packages/cloudinit/cloud.py", line 63, in run
return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python2.6/site-packages/cloudinit/helpers.py", line 197, in run
results = functor(*args)
File "/usr/lib/python2.6/site-packages/cloudinit/config/cc_package_update_upgrade_install.py", line 111, in handle
raise errors[-1]
ProcessExecutionError: Unexpected error while running command.
Command: ['yum', '-t', '-y', 'install', 'puppet3', 'lvm2', 'btrfs-progs']
Exit code: 1
Reason: -
Stdout: ''
Stderr: ''
When I run the command yum -t -y install puppet3 lvm2 btrfs-progs myself as root, it runs fine, but cloud-init fails to run it on its own.
Is there something I'm doing wrong here?
Evidently this was a bug that was fixed in later versions of the image. If this is happening to you, it may be a legitimate bug in cloud-init or in your server's implementation of it. A package upgrade may fix the problem; an update to the image will fix it globally.

Chef Error executing action `start` on resource 'service[httpd]'

Here is my very basic recipes/default.rb file:
package "httpd" do
action :install
end
node["apache"]["sites"].each do |sitename, data|
document_root = "/content/sites/#{sitename}"
directory document_root do
mode "0755"
recursive true
end
template "/etc/httpd/conf.d/#{sitename}.conf" do
source "vhost.erb"
mode "0644"
variables(
:document_root => document_root,
:port => data["port"],
:domain => data["domain"]
)
notifies :restart, "service[httpd]"
end
end
service "httpd" do
action [:enable, :start]
end
When I run the chef-client in the node it returns the following error:
Error executing action `start` on resource 'service[httpd]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of /sbin/service httpd start ----
STDOUT: Starting httpd: [FAILED]
STDERR: Syntax error on line 15 of /etc/httpd/conf.d/stedelahunty2.conf:
order takes one argument, 'allow,deny', 'deny,allow', or 'mutual-failure'
---- End output of /sbin/service httpd start ----
Ran /sbin/service httpd start returned 1
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/apache/recipes/default.rb
35: service "httpd" do
36: action [:enable, :start]
37: end
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/apache/recipes/default.rb:35:in `from_file'
service("httpd") do
action [:enable, :start]
supports {:restart=>false, :reload=>false, :status=>true}
retries 0
retry_delay 2
default_guard_interpreter :default
service_name "httpd"
enabled true
pattern "httpd"
declared_type :service
cookbook_name "apache"
recipe_name "default"
end
I've tried renaming the service to apache, changing the action to :restart, and commenting it out entirely, but then httpd fails to start. I just need a simple way to restart the service after the Chef run has completed.
Again, apologies for the novice question; I'm very new to coding.
Cheers
That's not a Chef problem. Apache httpd itself reports:
Syntax error on line 15 of /etc/httpd/conf.d/stedelahunty2.conf: order takes one argument, 'allow,deny', 'deny,allow', or 'mutual-failure'
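That error usually means the Order directive in the generated vhost received two arguments, for example a stray space as in Order allow, deny. A hypothetical sketch of the relevant lines in vhost.erb (your actual line 15 may differ; Apache 2.2 syntax shown):

<Directory "<%= @document_root %>">
    # Order takes exactly one argument; no space after the comma
    Order allow,deny
    Allow from all
</Directory>

Once the template is fixed, the notifies :restart, "service[httpd]" you already have will restart the service whenever the template changes.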
