dependency error while running windowsfeature - windows

This is what my manifest looks like:
class dotNetCore {
  notify { 'Installing NET-Framework-Core': }
  windowsfeature { 'NET-Framework-Core': }
  notify { 'Finished Installing NET-Framework-Core': }
}
class installIIS {
  require dotNetCore
  notify { 'Installing IIS': }
  windowsfeature { 'IIS':
    feature_name => [
      'Web-Server',
      'Web-WebServer',
      'Web-Asp-Net45',
      'Web-ISAPI-Ext',
      'Web-ISAPI-Filter',
      'NET-Framework-45-ASPNET',
      'WAS-NET-Environment',
      'Web-Http-Redirect',
      'Web-Filtering',
      'Web-Mgmt-Console',
      'Web-Mgmt-Tools'
    ]
  }
  notify { 'Finished Installing IIS': }
}
class serviceW3SVC {
  require installIIS
  notify { 'Setting serviceW3SVC': }
  service { 'W3SVC':
    ensure => 'running',
    enable => 'true',
  }
  notify { 'Finished Setting serviceW3SVC': }
}
class stopDefaultWebsite {
  require serviceW3SVC
  notify { 'Stopping Default Web Site': }
  iis::manage_site_state { 'Default Web Site':
    ensure    => 'stopped',
    site_name => 'Default Web Site'
  }
  notify { 'Finished Stopping Default Web Site': }
}
class includecoreandiis {
  contain dotNetCore
  contain installIIS
  contain serviceW3SVC
  contain stopDefaultWebsite
}
On the agent node, I am getting a dependency error in the Event Viewer:
Failed to apply catalog: Parameter provider failed on Exec[add-feature-NET-Framework-Core]: Invalid exec provider 'powershell' at /etc/puppetlabs/puppet/environments/production/modules/windowsfeature/manifests/init.pp:111
Wrapped exception:
Invalid exec provider 'powershell'
After restarting the Puppet agent service on the client node a couple of times, it fetches the rest of the files and it works.
How do I make it wait for all the required files to be downloaded before installing the mentioned Windows features?

You need to install the PowerShell provider module (puppetlabs/powershell).
Additionally, you may want to look at the windowsfeature module and anything else that is available in the puppetlabs/windows module pack.
After rereading, this also sounds like a pluginsync issue: the agent has to sync the provider code from the master before it can use it, so the catalog fails until a suitable provider is available.
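As a minimal sketch, assuming the master's modules are managed with a Puppetfile via r10k or librarian-puppet (the Forge name of the provider module is puppetlabs-powershell; pin a version as appropriate):
# Puppetfile on the Puppet master; version left unpinned for illustration
mod 'puppetlabs-powershell'
# or install it directly on the master:
# puppet module install puppetlabs-powershell
Once the module is on the master, the agent pluginsyncs the provider code at the start of its next run, before the catalog is applied, so the Exec with provider => 'powershell' can resolve.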

Related

How can I use vSphere Cloud Plugin inside Jenkins pipeline code?

I have a setup at work where the vSphere hosts are manually restarted before execution of a specific Jenkins job. As a noob in the office, I automated this process by adding an extra build step to restart VMs with the help of the vSphere Cloud Plugin (https://wiki.jenkins-ci.org/display/JENKINS/vSphere+Cloud+Plugin).
I would now like to integrate this as pipeline code; please advise.
I have already checked that this plugin is Pipeline compatible.
I currently trigger the vSphere host restart in the pipeline by making it remotely trigger a job configured with the vSphere Cloud Plugin.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                script {
                    sh "curl -v 'http://someserver.com/job/Vivin/job/executor_configurator/buildWithParameters?Host=build-114&token=bonkers'"
                }
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
I would like to integrate the vSphere Cloud Plugin directly in the pipeline code itself; please help me to integrate it.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                // vSphere cloud plugin code that is requested
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
Well, I found the solution myself with the help of the 'Pipeline Syntax' feature found in the menu of a Jenkins pipeline job.
The 'Pipeline Syntax' page contains the syntax of all the possible parameters made available via the API of the plugins installed on a Jenkins server, from which we can generate or develop the syntax we need.
http://<jenkins server url>/job/<pipeline job name>/pipeline-syntax/
My Jenkinsfile (pipeline) now looks like this:
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                vSphere buildStep: [$class: 'PowerOff', evenIfSuspended: false, ignoreIfNotExists: false, shutdownGracefully: true, vm: 'brewery-133'], serverName: 'vspherecentral'
                vSphere buildStep: [$class: 'PowerOn', timeoutInSeconds: 180, vm: 'brewery-133'], serverName: 'vspherecentral'
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(1)
                        }
                    }
                }
            }
        }
    }
}

Only run another PP file after one has completed

A chocolatey provider is required to install packages. This works, but only once another .pp file has finished executing.
The problem is that Puppet evaluates both files under the node statement and errors on the invalid provider. My current workaround is to run the first .pp file with the other commented out, let it run, then uncomment the second one and rerun with puppet agent --test, after which it all works.
I have tried tags and used an if statement with the tag, but this doesn't seem to work either.
class windows::chocolatey {
  exec { 'set_executionpolicy':
    command  => "set-executionpolicy unrestricted -force -scope process;
                 (iex((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')))>\$null 2>&1",
    provider => 'powershell',
    creates  => 'C:/ProgramData/chocolatey',
  }
}
node "web-iis-02" {
  class { 'windows': } # chocolatey installing to allow atom.pp to work
  class { 'atom': }    # init.pp below installs using chocolatey
}
# installs package
class atom {
  if tagged(windows) {
    include atom::pakages
    notify { "Calling Pakagepp script": }
  }
}
# if tagged, init.pp above calls this:
class atom::pakages {
  include chocolatey
  package { 'Atom':
    ensure   => 'latest',
    provider => 'chocolatey',
  }
}
I get this from pakages.pp:
Error: Failed to apply catalog: Parameter provider failed on
Package[Atom]: Invalid package provider 'chocolatey' (file:
/etc/puppetlabs/code/environments/production/modules/atom/manifests/pakages.pp, line: 3)
Try adding a require dependency, so that the atom class is applied after the windows class:
class { 'windows': }
class { 'atom':
  require => Class['windows'],
}
or quick and dirty:
class { 'windows': }
-> class { 'atom': }
You'll need to remove that tagged condition as it isn't needed.
I can't quite tell from your question which classes depend on which, but I'm pretty sure it is require you need. You may need to add a require for the chocolatey class:
class { 'atom':
  require => Class['windows', 'chocolatey'],
}

Is there a way to move the entire post {} build section in Jenkinsfile to the global pipeline library?

I'm relatively new to Jenkins pipelines, but having already implemented a few, I've realised I need to start using a Jenkins shared library before I go mad.
I have already figured out how to define some repetitive steps in the library and call them with less clutter from the Jenkinsfile, but I'm not sure whether the same thing can be done for the entire post-build section (though I've read about how to define the entire pipeline in the lib, and similar), as this is a pretty much static ending of every single pipeline:
@Library('jenkins-shared-library') _
pipeline {
    agent none
    stages {
        stage('System Info') {
            agent any
            steps { printSysInfo() }
        }
        stage('Init') {
            agent { label 'WinZipSE' }
            steps { init('SCMroot') }
        }
        stage('Build') {
            agent any
            steps { doMagic() }
        }
    }
    // This entire 'post {}' section needs to go to a shared lib
    // and be called with a simple method call, e.g.
    // doPostBuild()
    post {
        always {
            node('master') {
                googlechatnotification(
                    message: '[$BUILD_STATUS] Build $JOB_NAME $BUILD_NUMBER has finished',
                    url: 'id:credential_id_for_Ubuntu')
                step([$class: 'Mailer',
                      recipients: 'sysadmins@example.com me@example.com',
                      notifyEveryUnstableBuild: true,
                      sendToIndividuals: true])
            }
        }
        success {
            node('master') {
                echo 'This will run only if successful'
            }
        }
        failure {
            node('master') {
                echo 'This will run only if failed'
            }
        }
        // and so on
    }
}
I just don't know how to achieve that syntactically. For sure, I can define the entire post-build section in a lib var like doPostBuild.groovy:
def call () {
    post {...}
}
but how will I eventually call it from within my Jenkinsfile, outside of that defined post {} build block section (AKA the stages)?
I can call it within some stage('post build') { doPostBuild() }, but it won't behave the way a true post {} section is supposed to work, e.g. it won't get executed if there was a failure in one of the previous stages.
Any thoughts on that, and ideally working examples?
I am not entirely sure if this will work as I don't use declarative pipelines, so I am unsure how rigid the top-level structure is, but I would revert to a script block.
@Library('jenkins-shared-library') _
pipeline {
    agent none
    stages {
        stage('System Info') {
            agent any
            steps { printSysInfo() }
        }
        stage('Init') {
            agent { label 'WinZipSE' }
            steps { init('SCMroot') }
        }
        stage('Build') {
            agent any
            steps { doMagic() }
        }
    }
    // This entire 'post {}' section needs to go to a shared lib
    // and be called with a simple method call, e.g.
    // doPostBuild()
    script {
        doPostBuild()
    }
}
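For reference, here is a hedged sketch of what a shared-library step like the doPostBuild mentioned in the question could look like if it wraps only the body of the always condition rather than the whole post {} directive; this is illustrative rather than a tested implementation, and it simply reuses the notification steps from the Jenkinsfile above:
// vars/doPostBuild.groovy (illustrative sketch)
def call() {
    node('master') {
        googlechatnotification(
            message: '[$BUILD_STATUS] Build $JOB_NAME $BUILD_NUMBER has finished',
            url: 'id:credential_id_for_Ubuntu')
        step([$class: 'Mailer',
              recipients: 'sysadmins@example.com me@example.com',
              notifyEveryUnstableBuild: true,
              sendToIndividuals: true])
    }
}
The Jenkinsfile then keeps a thin post block, e.g. post { always { doPostBuild() } }, rather than moving the directive itself into the library.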

Puppet Install specific version of package depending on application version

I have a module which installs my Application.
To install system packages, I'm using virtual resources:
@package { [
  'unzip',
  'wget',
  'htop',
  'xorg-x11-server-Xvfb']:
  ensure => installed,
}
define myapp1_packages {
  realize(
    Package['unzip'],
    Package['fontconfig'],
    Package['libfreetype.so.6'])
}
@myapp1_packages { 'myapp1_packages': }
Then I use realize in my manifest to install the above packages:
realize(Myapp1_packages['myapp1_packages'])
But for each version of my application I also need appropriate versions of system packages.
I need something like this:
if $app_version == '1.0' {
  "install unzip-1xx"
  "install fontconfig-1-xx"
  "install libfreetype.so.6-1-x-xx"
} elsif $app_version == '2.0' {
  "install unzip-2xx"
  "install fontconfig-2-xx"
  "install libfreetype.so.6-2-x-xx"
}
What is the most elegant way to do this? And is it possible to keep virtual resources in that case? I'm looking at using ensure_packages, but I'm worried about resource duplication. Thanks for the help!
The best thing to do here is to make $app_version a parameter for your module: https://docs.puppet.com/puppet/4.10/lang_classes.html#class-parameters-and-variables. Note an example from the documentation here: https://docs.puppet.com/puppet/4.10/lang_classes.html#appendix-smart-parameter-defaults.
For your situation, the class would look like:
class myclass ($app_version = 'default version') {
  if $app_version == '1.0' {
    @package { 'unzip':       ensure => '1xx' }
    @package { 'fontconfig':  ensure => '1-xx' }
    @package { 'libfreetype': ensure => '6-1-xx' }
  }
  elsif $app_version == '2.0' {
    @package { 'unzip':       ensure => '2xx' }
    @package { 'fontconfig':  ensure => '2-xx' }
    @package { 'libfreetype': ensure => '6-2-xx' }
  }
}
thus also allowing you to retain your virtual resources.
You can then pass parameters to this class by declaring it like:
class { 'myclass': app_version => '2.0' }
or using automatic data bindings with hieradata:
# puppet manifest
include myclass
# hieradata
myclass::app_version: 2.0
Your collector elsewhere will then realize the correct versions for your packages.
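For illustration, a minimal sketch of how the virtual packages declared above could be realized from another class; the resource titles are taken from the example, and the collector form is equivalent:
# realize the virtual packages declared in myclass
realize(Package['unzip'], Package['fontconfig'], Package['libfreetype'])
# or, using a resource collector:
Package <| title == 'unzip' or title == 'fontconfig' or title == 'libfreetype' |>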

How to define jenkins build trigger in jenkinsfile to start build after other job

I would like to define a build trigger in my Jenkinsfile. I know how to do it for the BuildDiscarderProperty:
properties([[$class: 'jenkins.model.BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '50', artifactNumToKeepStr: '20']]])
How can I set the build trigger that starts the job when another project has been built? I cannot find a suitable entry in the Java API docs.
Edit:
My solution is to use the following code:
stage('Build Agent') {
    if (env.BRANCH_NAME == 'develop') {
        try {
            // try to start subsequent job, but don't wait for it to finish
            build job: '../Agent/develop', wait: false
        } catch (Exception ex) {
            echo "An error occurred while building the agent."
        }
    }
    if (env.BRANCH_NAME == 'master') {
        // start subsequent job and wait for it to finish
        build job: '../Agent/master', wait: true
    }
}
I just looked for the same thing and found this Jenkinsfile in jenkins-infra/jenkins.io.
In short:
properties([
    pipelineTriggers([cron('H/30 * * * *')])
])
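If the goal is specifically to start the job after another project has been built, the same properties/pipelineTriggers mechanism should also accept an upstream trigger; a sketch, where the upstream job name and threshold are illustrative:
properties([
    pipelineTriggers([
        upstream(upstreamProjects: 'some-upstream-job', threshold: hudson.model.Result.SUCCESS)
    ])
])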
This is an example:
// Project test1
pipeline {
    agent any
    stages {
        stage('hello') {
            steps {
                container('dind') {
                    sh """
                    echo "Hello world!"
                    """
                }
            }
        }
    }
    post {
        success {
            build propagate: false, job: 'test2'
        }
    }
}
post {} will be executed when project test1 has been built, and the code inside success {} will only be executed when project test1 is successful. build propagate: false, job: 'test2' will call project test2; propagate: false ensures that project test2's result does not affect project test1, which simply invokes it.
