Main
define hello::world {
  file { "/tmp/helloworld${name}": }
}
Test
require 'spec_helper'

describe 'hello::world' do
  let(:title) { '0' }

  context 'test' do
    let(:title) { '0' }

    it do
      should contain_file("/tmp/helloworld0")
    end
  end
end

at_exit { RSpec::Puppet::Coverage.report! }
Outcome
[user#host] sudo rspec
.
Finished in 0.26947 seconds
1 example, 0 failures
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
hello::world[0]
Multiple sources have been consulted, and matchers such as
it { should contain_define('hello::world[0]') }
or
it { should contain_class('hello::world[0]') }
were added, but the issue persists.
Question
How do you touch defines using rspec-puppet?
According to the documentation, you construct the correct matcher by replacing the colons in the resource type with underscores:
it { should contain_hello__world('0') }
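The transformation is mechanical: every "::" in the resource type becomes "__", and the result is prefixed with contain_. A quick illustration in plain Ruby (the helper matcher_for is made up for this sketch; rspec-puppet performs the equivalent step internally):

```ruby
# Sketch: how rspec-puppet derives a matcher name from a resource type.
# `matcher_for` is a hypothetical helper, shown only to illustrate the rule.
def matcher_for(resource_type)
  "contain_#{resource_type.gsub('::', '__')}"
end

puts matcher_for('hello::world')   # prints "contain_hello__world"
```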
Related
Hi all!
Since we updated Jenkins to v2.319, some pipelines broke.
I'm trying to run a job with eachFileRecurse() instead of traverse(), and I get this error:
Something went wrong: groovy.lang.MissingMethodException: No signature of method: java.io.File.eachFileRecurse() is applicable for argument types: (java.util.LinkedHashMap, groovy.io.FileType, WorkflowScript$_find_versions_closure1) values: [[nameFilter:app_ver.*.json$, excludeNameFilter:], ...]
Possible solutions: eachFileRecurse(groovy.io.FileType, groovy.lang.Closure), eachFileRecurse(groovy.lang.Closure)
This is my code:
def find_versions(dirFile, fileNamefilter, excludeFileNameFiles) {
    def list = []
    dirFile.traverse(type: FileType.FILES, nameFilter: fileNamefilter, excludeNameFilter: excludeFileNameFiles) {
        // println it.name
        it.eachLine { line ->
            // check if the line contains the version
            if (line.contains('"app_ver":')) {
                version = line.split(':')[1].split('-')[0].substring(1)
                version = version.replace('"', '')
                version = version.replace(' ', '')
                println it.name + " -> use " + version
                if (version.startsWith("v")) {
                    list.add(version)
                }
            }
        }
    }
    return list.unique()
}
Can anyone help?
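The exception itself points at the mismatch: unlike traverse(), eachFileRecurse() has no overload that accepts a map of options, only (FileType, Closure) or (Closure). If eachFileRecurse() must be used, the name filtering has to happen inside the closure. A rough sketch, reusing the parameter names from the code above (untested against this pipeline):

```groovy
import groovy.io.FileType

// eachFileRecurse has no nameFilter/excludeNameFilter options,
// so apply the regex filter manually inside the closure
dirFile.eachFileRecurse(FileType.FILES) { f ->
    if (f.name ==~ fileNamefilter) {
        // process the matching file here, as in the traverse() version
    }
}
```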
I am working on setting up automated build and deploy jobs in Jenkins.
Right now I have a single stage with parallel tasks, set up like this:
stage('Testing & documenting') {
    steps {
        parallel(
            "PHPLOC": {
                echo "Running phploc"
                sh "./src/vendor/phploc/phploc/phploc --exclude=./src/vendor --no-interaction --quiet --log-csv=./build/logs/loc.csv src tests"
            },
            "SLOC": {
                echo "Running sloc"
                sh "sloccount --duplicates --wide --details . > ./build/logs/sloccount.sc 2>/dev/null"
            },
            "CPD": {
                echo "Running copy-paste detection"
                sh "./src/vendor/sebastian/phpcpd/phpcpd --fuzzy . --exclude src/vendor --log-pmd ./build/logs/phpcpd.xml || true"
            },
            "MD": {
                echo "Running mess detection on code"
                sh "./src/vendor/phpmd/phpmd/src/bin/phpmd src xml phpmd_ruleset.xml --reportfile ./build/logs/phpmd_code.xml --exclude vendor,build --ignore-violations-on-exit --suffixes php"
            },
            "PHPUNIT": {
                echo "Running PHPUnit w/o code coverage"
                sh "./src/vendor/phpunit/phpunit/phpunit --configuration phpunit-quick.xml"
            }
        )
    }
}
After reading https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/ I noticed that they use a different structure:
stage("Documenting and Testing") {
    parallel {
        stage("Documenting") {
            agent any
            stages {
                stage("CPD") {
                    steps {
                        // CPD
                    }
                }
                stage("PMD") {
                    steps {
                        // PMD stuff
                    }
                }
            }
        }
        stage("Testing") {
            agent any
            stages {
                stage("PHPUnit") {
                    steps {
                        // PHPUnit
                    }
                }
            }
        }
    }
}
I am not sure what the difference between these two approaches is.
The first example running parallel inside the steps block was introduced by the earlier versions of the declarative pipeline. This had some shortcomings. For example, to run each parallel branch on a different agent, you need to use a node step, and if you do that, the output of the parallel branch won’t be available for post directives (at a stage or pipeline level). Basically the old parallel step required you to use Scripted Pipeline within a Declarative Pipeline.
The second example is a true declarative syntax introduced to overcome the shortcomings of the former. In addition, this particular example runs two serial stages within the parallel stage ‘Documenting’.
You can read the official blog post to learn more about the parallel directive: https://jenkins.io/blog/2017/09/25/declarative-1/
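To make the difference concrete, here is a minimal sketch of the newer declarative form (stage names and steps are placeholders, not taken from the question): each parallel branch is a full stage that can declare its own agent and still use a post directive.

```groovy
pipeline {
    agent none
    stages {
        stage('Parallel work') {
            parallel {
                stage('Branch A') {
                    agent any
                    steps {
                        echo 'running branch A'
                    }
                    post {
                        // post works per branch, even with a separate agent
                        always {
                            echo 'branch A finished'
                        }
                    }
                }
                stage('Branch B') {
                    agent any
                    steps {
                        echo 'running branch B'
                    }
                }
            }
        }
    }
}
```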
This is an InSpec control that checks the VPC ID, ports, subnets, and AZs of a specific network load balancer:
control 'Loadbalancer Config' do
  title 'Checks for correct configuration of LBs'

  describe aws_elbs.where(arn: 'arn:aws:elasticloadbalancing:eu-central-1:123456789:loadbalancer/app/web-app-alb/1d234567890d') do
    its('vpc_ids') { should include 'vpc-a12345678' }
    its('subnet_ids') { should include 'subnet-12345678' }
    its('internal_ports') { should include 443 }
    its('availability_zones') { should include 'eu-central-1a' }
  end
end
When executing, the tests fail and I get:
expected [] to include "vpc-a12345678"
expected [] to include 443
expected [] to include "subnet-12345678"
expected [] to include "eu-central-1a"
I double-checked the ARN of the load balancer, but I always get this empty array of results.
I am now pretty sure this happens because InSpec's aws_elbs resource does not support Network Load Balancers.
Will leave this here in case someone has the same issue.
I've been using Ginkgo for a while, and I have found a behavior I don't really understand. I have a set of specs that I want to run if and only if a condition is available. If the condition is not available, I want to skip the test suite.
Something like this:
ginkgo.BeforeSuite(func() {
    if !CheckCondition() {
        ginkgo.Skip("condition not available")
    }
})
When the suite is skipped, this counts as a failure:
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
I assumed the test should be counted as skipped. Am I missing something? Any comments are welcome.
Thanks
I think you are using the Skip method incorrectly. It should be used inside a spec, as below, not inside BeforeSuite. When used inside a spec, it does show up as "skipped" in the summary.
It("should do something, if it can", func() {
    if !someCondition {
        Skip("special condition wasn't met")
    }
})
https://onsi.github.io/ginkgo/#the-spec-runner
I have written the following validation rule:
@Check
def checkDeclarationIsNotReferenceToItself(Declaration dec) {
    if (dec.decCon.singleContent.reference == null && !dec.decCon.nextCon.isEmpty) {
        // only proceed if it is a reference
        return
    }
    var name = dec.name
    if (dec.decCon.singleContent.reference.name == name) {
        // only if the declaration is a self-reference without further actions
        var warningMsg = "The declaration '" + name + "' is a reference to itself"
        warning(warningMsg,
            SQFPackage.eINSTANCE.declaration_DecCon,
            SELFREFERENCE)
    }
}
And then I have written a test case for it that looks as follows:
@Test
def void checkDeclarationIsNotReferenceToItselfTest() {
    '''
        test = 3;
        test = test;
    '''.parse.assertWarning(SQFPackage.eINSTANCE.decContent,
        SQFValidator.SELFREFERENCE,
        "The declaration 'test' is a reference to itself")
}
But when I run it with JUnit, it reports an error:
Expected WARNING 'raven.sqf.SelfReference' on DecContent at [-1:-1] but got
WARNING (raven.sqf.SelfReference) 'The declaration 'test' is a reference to itself' on Declaration, offset 18, length 4
I don't understand this, because it expects exactly that warning message (as far as I can see).
Does anyone have an idea why it doesn't work?
Greetings, Krzmbrzl
It looks like the way you create the warning and the way you test the validation do not match.
warning(warningMsg,
SQFPackage.eINSTANCE.declaration_DecCon,
SELFREFERENCE)
creates the warning on a Declaration
.assertWarning(SQFPackage.eINSTANCE.decContent,
SQFValidator.SELFREFERENCE,
"The declaration 'test' is a reference to itself")
tests for a DecContent.
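One way to bring the two sides in line (a sketch only; the exact accessor on the generated SQFPackage depends on your grammar, so SQFPackage.eINSTANCE.declaration is an assumption here) is to assert the warning on the EClass it is actually attached to:

```xtend
// Hypothetical fix: assert against the Declaration EClass,
// on which the validator actually reports the warning
'''
    test = 3;
    test = test;
'''.parse.assertWarning(SQFPackage.eINSTANCE.declaration,
    SQFValidator.SELFREFERENCE,
    "The declaration 'test' is a reference to itself")
```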