rspec/serverspec service test always fails - ruby

I believe this issue is probably a duplicate of "serverspec service test returns incorrect failure", but I'm including a bit more information about my execution environment.
I have a bunch of successful serverspec tests executing against a RHEL6 VM on AWS.
However, any "service" test seems to fail with the matchers be_enabled and be_running.
I have the following in my spec_helper.rb:
set :os, :family => 'redhat', :release => '6', :arch => 'x86_64'
I tried both serverspec and rspec syntax for the tests and both fail as they run the same commands:
describe service('ntpd') do
it { should be_enabled }
it { should be_running }
end
it "is running ntpd" do
expect(service("ntpd")).to be_enabled
expect(service("ntpd")).to be_running
end
Failure/Error: it { should be_enabled }
expected Service "ntpd" to be enabled
sudo -p 'Password: ' /bin/sh -c chkconfig\ --list\ ntpd\ \|\ grep\ 3:on
Failure/Error: it { should be_running }
expected Service "ntpd" to be running
sudo -p 'Password: ' /bin/sh -c service\ ntpd\ status
However, running them locally on the server succeeds:
$ sudo -p 'Password: ' /bin/sh -c chkconfig\ --list\ ntpd\ \|\ grep\ 3:on
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
$ echo $?
0
$ sudo -p 'Password: ' /bin/sh -c service\ ntpd\ status
ntpd (pid 1101) is running...
$ echo $?
0
I tried looking into setting up some debugging with pry-byebug, but that looked not so straightforward, so I gave up on it for now.
I'm running Ruby 2.0, serverspec 2.24, and rspec 3.3.
Can anyone help point me in the right direction?

I needed to specify the runlevel to check, and then things worked. I presume this is some backwards-compatibility issue between RHEL6 and RHEL7 (SysV init vs. systemd), as the documentation indicates that the tests above should work.
describe service('ntpd') do
it { should be_enabled.with_level(2) }
it { should be_enabled.with_level(3) }
it { should be_enabled.with_level(4) }
it { should be_enabled.with_level(5) }
it { should be_running }
end

If the with_level solution doesn't help, I also found that you may need to set the PATH variable in spec_helper.rb to include /sbin and /usr/sbin. That did the trick for me personally.
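For reference, here is a minimal spec_helper.rb sketch combining the settings above with an expanded command path (the :ssh backend line and the exact path entries are assumptions; adjust them for your environment):
# spec_helper.rb (sketch)
require 'serverspec'
set :backend, :ssh   # assumption: tests run over SSH against the AWS VM
set :os, :family => 'redhat', :release => '6', :arch => 'x86_64'
# chkconfig and service live in /sbin and /usr/sbin on RHEL6, so make sure
# serverspec can find them when it builds its commands.
set :path, '/sbin:/usr/sbin:$PATH'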

Related

Jenkins pipeline - How to read the success status of build?

Below is the output after running the build (with success):
$ sam build
2019-06-02 15:36:37 Building resource 'SomeFunction'
2019-06-02 15:36:37 Running PythonPipBuilder:ResolveDependencies
2019-06-02 15:36:39 Running PythonPipBuilder:CopySource
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Package: sam package --s3-bucket <yourbucket>
The [command] && echo "Yes" approach did not help me.
I tried to use this in a Jenkins pipeline:
def samAppBuildStatus = sh(script: '[cd sam-app-folder; sam build | grep 'Succeeded' ] && echo true', returnStatus: true) as Boolean
as a one-liner script command, but it does not work.
How can I grab the build success status using a bash script, for use in a Jenkins pipeline?
Use this to grab the exit status of the command:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build | grep "Succeeded"'
or this if you don't want to see any stderr in the output:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build 2>&1 | grep "Succeeded"'
then later in your Jenkinsfile you can do something like this:
if (!samAppBuildStatus){
echo "build success [$samAppBuildStatus]"
} else {
echo "build failed [$samAppBuildStatus]"
}
The reason for the ! is that shell and Groovy define truthiness differently: an exit status of 0 means success in the shell, but 0 is falsy in Groovy.
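Putting it together, a minimal scripted-pipeline sketch (the node/stage wrapper and the use of the error step are additions for illustration, not part of the original answer):
node {
    stage('Build SAM app') {
        // returnStatus gives the shell exit code: 0 when grep finds "Succeeded" in the sam build output
        def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build 2>&1 | grep "Succeeded"'
        if (!samAppBuildStatus) {
            echo "build success [$samAppBuildStatus]"
        } else {
            error "build failed [$samAppBuildStatus]"
        }
    }
}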

Include file conditionally based on a bash script

I have a bash command that will return either 1 or 0. I want to run said command from puppet:
exec { 'Check if Thinkpad':
command => 'sudo dmidecode | grep -q ThinkPad && echo 1 || echo 0',
path => '/usr/bin/:/bin/bash/',
environment => "HOME=/root"
}
Is there a way I can include a file using puppet only if my command returned 1?
file { '/etc/i3/config':
source => 'puppet:///modules/i3/thinkpad',
owner => 'root',
group => 'root',
mode => '0644',
}
You can use an external fact to use the bash script as is. Inside the module's facts.d directory, you could place the script.
#!/bin/bash
if dmidecode | grep -q ThinkPad; then
  echo 'is_thinkpad=true'
else
  echo 'is_thinkpad=false'
fi
You can also use a custom fact inside the lib/facter directory of your module.
Facter.add(:is_thinkpad) do
confine kernel: linux
setcode do
`dmidecode | grep -q ThinkPad && echo true || echo false`
end
end
In both cases, the fact name is_thinkpad follows the naming convention for boolean facts that describe a type of system. You can then update the code in your manifest to use this boolean:
if $facts['is_thinkpad'] == true {
file { '/etc/i3/config':
source => 'puppet:///modules/i3/thinkpad',
owner => 'root',
group => 'root',
mode => '0644',
}
}
This will provide you with the functionality you desire.
https://docs.puppet.com/facter/3.6/custom_facts.html#adding-custom-facts-to-facter
https://docs.puppet.com/facter/3.6/custom_facts.html#external-facts
You will probably need to turn your bash script into a "custom fact" -- which is something I've only done once and don't fully understand enough to teach you how.
I want to say that the easiest way to set up a custom fact is to put your script into /etc/facter/facts.d/ on the agent machine, and make sure it ends with a line that says
echo "thinkpadcheck=1"
or
echo "thinkpadcheck=0"
You can test it with (note: you must be root)
sudo facter -p | grep think
and it should return
thinkpadcheck => 1
But once you have done that, then your puppet script can say
if $thinkpadcheck == 1
{
file { '/etc/i3/config':
source => 'puppet:///modules/i3/thinkpad',
owner => 'root',
group => 'root',
mode => '0644',
}
}
else
{
notify { "thinkpadcheck failed for $hostname" : }
}
I'd like to share another method I found in the Puppet Cookbook 3rd edition (page 118):
message.rb
#!/usr/bin/env ruby
puts "This runs on the master if you are centralized"
Make your script executable with:
sudo chmod +x /usr/local/bin/message.rb
message.pp
$message = generate('/usr/local/bin/message.rb')
notify { $message: }
Then run:
puppet apply message.pp
This example uses a Ruby script, but any type of script, including a basic shell script (as was needed in my case), can be used to set a variable in Puppet.
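For instance, a minimal sketch of the same pattern with a shell script (the path is hypothetical, and remember that, as the comment in message.rb notes, generate() runs the script on the master, or locally under puppet apply):
#!/bin/sh
# /usr/local/bin/is_thinkpad.sh (hypothetical path, made executable with chmod +x)
if dmidecode | grep -q ThinkPad; then
    echo -n 'true'
else
    echo -n 'false'
fi
and in the manifest:
$is_thinkpad = generate('/usr/local/bin/is_thinkpad.sh')
notify { "is_thinkpad=${is_thinkpad}": }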

Puppet: Exec from class when Exec from another class is successful

I want to call an Exec only when another Exec from a different class is executed successfully.
class mysql {
exec { 'load-sql':
command => 'mysql -uadmi -pxxx general < /vagrant/sites/ddbb/general.sql',
path => ['/bin', '/usr/bin'],
timeout => 0,
onlyif => "test -f /vagrant/sites/ddbb/general.sql",
}
exec { 'delete-general-sql':
command => 'sudo rm /vagrant/sites/ddbb/general.sql',
path => ['/bin', '/usr/bin'],
onlyif => "test -f /vagrant/sites/ddbb/general.sql",
require => Exec['load-sql'],
}
}
class sphinx {
exec { 'sphinx-create-all-index':
command => 'sudo indexer -c /etc/sphinxsearch/sphinx.conf --all --rotate',
require => Exec['load-sql'],
path => '/usr/bin/';
}
}
The command 'delete-general-sql' is executed only if 'load-sql' executes successfully, but 'sphinx-create-all-index' ignores the result of 'load-sql'...
Thanks in advance!
You are mixing up require and onlyif.
Read about Puppet ordering.
require
Causes a resource to be applied after the target resource.
so
require => Exec['load-sql'],
means: apply this resource after the exec { 'load-sql': } resource has been applied.
On the other hand, onlyif in exec means:
If this parameter is set, then this exec will only run if the command has an exit code of 0.
So you must add an onlyif with a proper test (probably onlyif => "test -f /vagrant/sites/ddbb/general.sql") to 'sphinx-create-all-index'.
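For example, a sketch of the sphinx class with that check added (command and file path copied from the question):
class sphinx {
  exec { 'sphinx-create-all-index':
    command => 'sudo indexer -c /etc/sphinxsearch/sphinx.conf --all --rotate',
    onlyif  => 'test -f /vagrant/sites/ddbb/general.sql',
    require => Exec['load-sql'],
    path    => ['/bin', '/usr/bin'],
  }
}
Keep in mind that 'delete-general-sql' removes that same file, so without an ordering between the two resources the check may fail if the delete runs first.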
To make the dependent exec run only after the previous one has actually run, you can use subscribe and refreshonly.
exec { 'sphinx-create-all-index':
command => 'sudo indexer -c /etc/sphinxsearch/sphinx.conf --all --rotate',
subscribe => Exec['load-sql'],
refreshonly => true,
path => '/usr/bin/';
}
This has some caveats: you may have a hard time getting Puppet to execute this task again if something goes wrong the first time around.

Expect fails but I don't see why

I have a bash script that gets info from Heroku so that I can pull a copy of my database. That script works fine in Cygwin, but when I run it from cron it halts, because the shell it uses gets stuck at Heroku's authentication prompt from the Heroku Toolbelt.
Here is my crontab:
SHELL=/usr/bin/bash
5 8-18 * * 1-5 /cygdrive/c/Users/sam/work/push_db.sh >>/cygdrive/c/Users/sam/work/output.txt
I have read the Googles and the man page within cygwin to come up with this addition:
#!/usr/bin/bash
. /home/sam.walton/.profile
echo $SHELL
curl -H "Accept: application/vnd.heroku+json; version=3" -n https://api.heroku.com/
#. $HOME/.bash_profile
echo `heroku.bat pgbackups:capture --expire`
#spawn heroku.bat pgbackups:capture --expire
expect {
"Email:" { send -- "$($HEROKU_LOGIN)\r"}
"Password (typing will be hidden):" { send -- "$HEROKU_PW\r" }
timeout { echo "timed out during login"; exit 1 }
}
sleep 2
echo "first"
curl -o latest.dump -L "$(heroku.bat pgbackups:url | dos2unix)"
Here's the output from the output.txt
/usr/bin/bash
{
"links":[
{
"rel":"schema",
"href":"https://api.heroku.com/schema"
}
]
}
Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed. Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed. Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed.
As you can see, it appears that the output is not getting the result of the send command; it looks like it's still waiting. I've done many experiments with the credentials and the expect statements, and they all stop here. I've seen a few examples and attempted to try them out, but I'm getting fuzzy-eyed, which is why I'm posting here. What am I not understanding?
Thanks to comments, I'm reminded to explicitly place my env variables in .bashrc:
[[ -s $USERPROFILE/.pik/.pikrc ]] && source "$USERPROFILE/.pik/.pikrc"
export HEROKU_LOGIN=myEmailHere
export HEROKU_PW=myPWhere
My revised script, per @Dinesh's excellent example, is below:
. /home/sam.walton/.bashrc
echo $SHELL
echo $HEROKU_LOGIN
curl -H "Accept: application/vnd.heroku+json; version=3" -n https://api.heroku.com/
expect -d -c "
spawn heroku.bat pgbackups:capture --expire --app gw-inspector
expect {
"Email:" { send -- "myEmailHere\r"; exp_continue}
"Password (typing will be hidden):" { send -- "myPWhere\r" }
timeout { puts "timed out during login"; exit 1 }
}
"
sleep 2
echo "first"
This should work, but the echo of the variable yields nothing, giving me a clue that the variable is not being read, so I am testing with the variables hardcoded directly to rule that out. But as you can see from my output, not only does the echo yield nothing, there is also no sign that any diagnostics are being produced, which makes me wonder whether the script is even reaching the expect call, and what the result of the spawn command is. To restate, the heroku.bat command works outside the expect block, but the results are above. The result of the command directly above is:
/usr/bin/bash
{
"links":[
{
"rel":"schema",
"href":"https://api.heroku.com/schema"
}
]
}
What am I doing wrong, and how can I get some diagnostic output?
If you are going to use the expect code inside your bash script, instead of calling it separately, then you should use the -c flag option.
From your code, I assume that you have the environment variables HEROKU_LOGIN and HEROKU_PW declared in the bashrc file.
#!/usr/bin/bash
#Your code here
expect -c "
spawn <your-executable-process-here>
expect {
    # HEROKU_LOGIN & HEROKU_PW will be replaced with the bash variable values.
    \"Email:\" { send -- \"$HEROKU_LOGIN\r\"; exp_continue }
    \"Password (typing will be hidden):\" { send -- \"$HEROKU_PW\r\" }
    timeout { puts \"timed out during login\"; exit 1 }
}
"
#Your further bash code here
You should not use the echo command inside expect code; use puts instead. Spawning the process inside the expect code will be more robust than spawning it outside.
Notice the use of double quotes with the expect -c flag. If you use single quotes, then the bash script won't do any form of substitution. So, if you need bash variable substitution, you should use double quotes with the expect -c flag.
To learn more about the usage of the -c flag, have a look here
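A tiny illustration of that quoting difference (the NAME variable is just an example):
NAME="world"
# Double quotes: bash substitutes $NAME before expect ever sees the script.
expect -c "puts \"hello $NAME\"; exit"
# Single quotes: expect receives the literal $NAME and errors out,
# because no Tcl variable named NAME exists.
expect -c 'puts "hello $NAME"; exit'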
If you still have any issues, you can debug by adding the -d flag in the following way:
expect -d -c "
your code here
"

exec command unless directory exists in puppet

How do I exec a command if a directory does not exist, in a Puppet file?
exec { "my_exec_task":
command => "tar zxf /home/user/tmp/test.tar.gz",
unless => "test -d /home/user/tmp/new_directory",
path => "/usr/local/bin/:/bin/",
}
I get the error: "Could not evaluate: Could not find command 'test'". Also, is this the best practice for checking whether a directory does not exist?
test works for me at /usr/bin, so adding that to path could solve the error.
unless => 'bash -c "test -d /home/user/tmp/new_directory"',
Should work too. But I think the correct way is to use creates:
exec { "my_exec_task":
command => "tar zxf /home/user/tmp/test.tar.gz",
creates => "/home/user/tmp/new_directory",
path => "/usr/local/bin/:/bin/",
}
The actual problem is in path:
path => [ '/usr/local/bin', '/sbin', '/bin', '/usr/sbin', '/usr/bin' ]
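For completeness, the original resource with that expanded path (otherwise unchanged):
exec { "my_exec_task":
  command => "tar zxf /home/user/tmp/test.tar.gz",
  unless  => "test -d /home/user/tmp/new_directory",
  path    => [ '/usr/local/bin', '/sbin', '/bin', '/usr/sbin', '/usr/bin' ],
}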
