I would like to check whether an RPM package is installed on the server, and which version it is.
send "rpm -qa | grep ^cman\r"
expect {
-re "(cman-.*)\r" { set cman $expect_out(0,string) }
default { set cman "no cman" }
}
It works correctly when cman is installed, but it just waits until the timeout when cman isn't in the list.
How should I handle the else branch?
Try it like this (the 'c'man_not_found spelling keeps the echoed command line itself from matching the pattern, while the command's actual output still does):
send "rpm -qa | grep ^cman || echo 'c'man_not_found\r"
expect {
-re "(cman-.*)\r" {
set cman $expect_out(1,string)
}
cman_not_found {
set cman "no cman"
}
}
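As an aside, rpm -q exits with a non-zero status when the package is not installed, so the presence check does not strictly need grep. A minimal shell sketch (run in a plain shell rather than through expect, assuming the package name is cman):
# rpm -q prints "cman-<version>" and exits 0 when the package is installed,
# and prints "package cman is not installed" with a non-zero exit status otherwise
if cman=$(rpm -q cman); then
    echo "installed: $cman"
else
    cman="no cman"
fi
echo "$cman"
Sending rpm -q cman instead of the rpm -qa | grep pipeline keeps the output down to a single, predictable line, which also simplifies the expect patterns.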
I want to deploy an Ubuntu VM on Azure and automatically execute a few lines of Bash code right after the VM is deployed. The Bash code is supposed to install PowerShell on the VM. To do this, I use this Bicep file. Below you can see an extract of that Bicep file where I specify what Bash code I want to be executed post deployment.
resource deploymentscript 'Microsoft.Compute/virtualMachines/runCommands@2022-08-01' = {
parent: virtualMachine
name: 'postDeploymentPSInstall'
location: location
properties: {
source: {
script: '''sudo apt-get update &&\
sudo apt-get install -y wget apt-transport-https software-properties-common &&\
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb" &&\
sudo dpkg -i packages-microsoft-prod.deb &&\
sudo apt-get update &&\
sudo apt-get install -y powershell &&\
pwsh'''
}
}
}
I searched for solutions on the web but only found conflicting explanations. I made the code above with the help of this tutorial. The only difference I see is that I'm using Bash and not PowerShell like the blog post author. Thanks for your help.
To deploy an Ubuntu VM on Azure and automatically execute a few lines of Bash code right after the VM is deployed:
I created a Linux VM and used a run command to install PowerShell inside the VM during deployment, and I was able to achieve the desired result with the Bicep file below.
@description('Name of the Network Security Group')
param networkSecurityGroupName string = 'SecGroupNet'
var publicIPAddressName = '${vmName}PublicIP'
var networkInterfaceName = '${vmName}NetInt'
var osDiskType = 'Standard_LRS'
var subnetAddressPrefix = '10.1.0.0/24'
var addressPrefix = '10.1.0.0/16'
var linuxConfiguration = {
disablePasswordAuthentication: true
ssh: {
publicKeys: [
{
path: '/home/${adminUsername}/.ssh/authorized_keys'
keyData: adminPassword
}
]
}
}
resource nic 'Microsoft.Network/networkInterfaces@2021-05-01' = {
name: networkInterfaceName
location: location
properties: {
ipConfigurations: [
{
name: 'ipconfig1'
properties: {
subnet: {
id: subnet.id
}
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIP.id
}
}
}
]
networkSecurityGroup: {
id: nsg.id
}
}
}
resource nsg 'Microsoft.Network/networkSecurityGroups@2021-05-01' = {
name: networkSecurityGroupName
location: location
properties: {
securityRules: [
{
name: 'SSH'
properties: {
priority: 1000
protocol: 'Tcp'
access: 'Allow'
direction: 'Inbound'
sourceAddressPrefix: '*'
sourcePortRange: '*'
destinationAddressPrefix: '*'
destinationPortRange: '22'
}
}
]
}
}
resource vnet 'Microsoft.Network/virtualNetworks@2021-05-01' = {
name: virtualNetworkName
location: location
properties: {
addressSpace: {
addressPrefixes: [
addressPrefix
]
}
}
}
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-05-01' = {
parent: vnet
name: subnetName
properties: {
addressPrefix: subnetAddressPrefix
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
resource publicIP 'Microsoft.Network/publicIPAddresses@2021-05-01' = {
name: publicIPAddressName
location: location
sku: {
name: 'Basic'
}
properties: {
publicIPAllocationMethod: 'Dynamic'
publicIPAddressVersion: 'IPv4'
dnsSettings: {
domainNameLabel: dnsLabelPrefix
}
idleTimeoutInMinutes: 4
}
}
resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
name: vmName
location: location
properties: {
hardwareProfile: {
vmSize: vmSize
}
storageProfile: {
osDisk: {
createOption: 'FromImage'
managedDisk: {
storageAccountType: osDiskType
}
}
imageReference: {
publisher: 'Canonical'
offer: 'UbuntuServer'
sku: ubuntuOSVersion
version: 'latest'
}
}
networkProfile: {
networkInterfaces: [
{
id: nic.id
}
]
}
osProfile: {
computerName: vmName
adminUsername: adminUsername
adminPassword: adminPassword
linuxConfiguration: ((authenticationType == 'password') ? null : linuxConfiguration)
}
}
}
resource deploymentscript 'Microsoft.Compute/virtualMachines/runCommands@2022-03-01' = {
parent: vm
name: 'linuxscript'
location: location
properties: {
source: {
script: '''# Update the list of packages
sudo apt-get update;
#Install pre-requisite packages.
sudo apt-get install -y wget apt-transport-https software-properties-common;
#Download the Microsoft repository GPG keys
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb";
#Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb;
#Update the list of packages after we added packages.microsoft.com
sudo apt-get update;
#Install PowerShell
sudo apt-get install -y powershell;
#Start PowerShell
pwsh'''
}
}
}
output adminUsername string = adminUsername
output hostname string = publicIP.properties.dnsSettings.fqdn
output sshCommand string = 'ssh ${adminUsername}@${publicIP.properties.dnsSettings.fqdn}'
The template deployed successfully, and the deployment shows as succeeded in the Azure portal.
After the deployment, I ssh'd into my VM and ran pwsh to check whether PowerShell was installed; it was installed successfully.
Refer to the MS docs and the run command template MS docs for details.
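If you want to confirm the installation without ssh'ing into the VM, you can also trigger a run command from your own shell with the Azure CLI. A rough sketch, assuming the resource group and VM are named myResourceGroup and myVm (substitute your own names):
# run a shell command inside the VM and print the installed PowerShell version
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myVm \
  --command-id RunShellScript \
  --scripts "pwsh -Version"
The script's stdout and stderr come back in the JSON response of the command.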
Your problem is a misunderstanding of what the && does.
With &&, the shell still runs the commands one after another, but it stops the chain at the first command that exits with a non-zero status, so a single failing step silently skips everything after it.
Replace all instances of "&&\" with ";\" and your script should work: the commands will run sequentially, waiting for the previous line to complete, and each subsequent line is attempted even if an earlier one failed.
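A quick way to see the difference in any POSIX shell:
false && echo "you will not see this"   # && stops the chain as soon as a command fails
false ;  echo "this always prints"      # ; runs the next command regardless of the exit status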
I'm currently trying to get to the bottom of an error that only occurs in CI, thrown by a script that is called via Gradle Exec with Gradle 7.5.
The issue is that I can't see any error messages in the log, as they don't seem to be picked up by Gradle.
To reproduce this, I've created a small Gradle plugin located in buildSrc and an .sh script located under /scripts.
import org.apache.tools.ant.taskdefs.condition.Os
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.tasks.Exec
class ExecTestPlugin implements Plugin<Project> {
private boolean ENABLE_CUSTOM_OUTPUT = true;
@Override
void apply(Project target) {
target.tasks.register("testExec", Exec) {
group = "other"
workingDir(target.getProjectDir().getAbsolutePath() + File.separator + "scripts")
if (ENABLE_CUSTOM_OUTPUT) {
standardOutput = new ByteArrayOutputStream()
errorOutput = new ByteArrayOutputStream()
doLast {
def result = standardOutput.toString()
println "the result value is: $result"
def error = errorOutput.toString()
println "the error value is: $error"
println "exit value: " + execResult.exitValue
}
}
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
commandLine 'cmd', '/c', "test-failing-script.sh"
} else {
commandLine "sh", "-c", "test-failing-script.sh"
}
if (ENABLE_CUSTOM_OUTPUT) {
ext.output = {
return standardOutput.toString()
}
}
}
}
}
test-failing-script.sh
set -e
set -o pipefail
echo -e This message goes who knows where
echo This message goes to stderr >&2
echo This message goes to stdout >&1
#curl -sf -o /dev/null http://google.com
exit 1;
If the flag is disabled, I'm getting no output at all
If the flag is enabled, I'm getting
the result value is:
the error value is:
exit value: 0
I'm expecting exit value 1 and some output on both stdout and stderr.
Why can't I get the right response from the called script?
It's been some time since I faced this issue, but I think the problem could be that sh -c does not forward stdout and the exit value. I can't reproduce it right now; it might be fixed in a more recent version of Gradle. Try using Gradle 7.5.
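One thing worth checking independently of Gradle: sh -c resolves its argument through PATH rather than the working directory, so a script that only exists in ./scripts has to be called with an explicit path. A quick check from inside the scripts directory (assuming the script is executable):
sh -c "test-failing-script.sh"; echo $?     # "not found", exit code 127: sh only searched PATH
sh -c "./test-failing-script.sh"; echo $?   # the script actually runs and its non-zero exit code is propagated
If that is the cause, using "./test-failing-script.sh" (or an absolute path) in commandLine should make the real stdout, stderr, and exit value show up.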
I am getting a runtime value in the build stage, which I stored in an environment variable. I saved that to an env.cfg file under WORKSPACE.
Now I am trying to get that value in the post pipeline step, to be used in an email notification. I tried the load method, but it did not work.
Any help?
post {
always {
echo $SNAPSHOT // this always comes null
}
}
This is how you can access an environment variable across the pipeline:
pipeline {
agent any;
environment {
MESSAGE="Hello World"
}
stages {
stage('one') {
steps {
echo "${env.MESSAGE}"
sh "echo $MESSAGE"
script {
print env.MESSAGE
}
}
}
}
post {
success {
echo "${env.MESSAGE}"
script {
print env.MESSAGE
}
}
failure {
echo "${env.MESSAGE}"
script {
print env.MESSAGE
}
}
}
}
But as per your scenario: let's say I have a file called .env with the content below in the current Jenkins job WORKSPACE, and I want to read it and make those environment variables available in the pipeline.
.env
export SNAPSHOT=1.0.0
export MESSAGE='Hello World'
export MESSAGE_FROM_ENV_FILE='Hello From .env file'
Your pipeline should look like this:
scripted pipeline
node {
stage('one') {
sh """
source $WORKSPACE/.env
echo \$SNAPSHOT
echo \$MESSAGE
echo \$MESSAGE_FROM_ENV_FILE
"""
}
}
declarative pipeline
pipeline {
agent any;
stages {
stage('build') {
steps {
sh """
source $WORKSPACE/.env
echo \$SNAPSHOT
echo \$MESSAGE
echo \$MESSAGE_FROM_ENV_FILE
"""
}
}
}
post {
success {
sh """
source $WORKSPACE/.env
echo \$SNAPSHOT
echo \$MESSAGE
echo \$MESSAGE_FROM_ENV_FILE
"""
}
}
}
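Keep in mind that every sh step starts a fresh shell process, so anything you source or export in one step is gone by the time the next step runs; that is why the .env file is sourced again inside the post block. (If the step runs /bin/sh rather than bash, use ". file" instead of "source file".) A quick illustration in plain shell, with each bash -c call standing in for one sh step:
bash -c '. ./.env; echo "inside the step: $SNAPSHOT"'   # prints 1.0.0
bash -c 'echo "in the next step: $SNAPSHOT"'            # prints nothing: the variable did not survive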
You need a global variable:
SNAPSHOT = ""
println "SNAPSHOT is ${SNAPSHOT}"
pipeline {
agent any
stages {
stage('Build') {
steps {
script {
println "SNAPSHOT is ${SNAPSHOT}"
SNAPSHOT = "Modified"
println "SNAPSHOT is now ${SNAPSHOT}"
}
}
}
}
post {
always {
echo "${SNAPSHOT}"
}
}
}
My pipeline generates a dynamic recipient list on each job execution. I'm trying to use that list, which I set as a variable, in the 'To' section of the emailext plugin. The problem is that the content of the variable is not resolved in the emailext step.
pipeline {
agent {
label 'master'
}
options {
timeout(time: 20, unit: 'HOURS')
}
stages {
stage('Find old Projects') {
steps {
sh '''
find $JENKINS_HOME/jobs/* -type f -name "nextBuildNumber" -mtime +1550|egrep -v "configurations|workspace|modules|promotions|BITBUCKET"|awk -F/ '{print $6}'|sort -u >results.txt
'''
}
}
stage('Generate recipient List') {
steps {
sh '''
for Project in `cat results.txt`
do
grep "mail.com" $JENKINS_HOME/jobs/$Project/config.xml|grep -iv "Ansprechpartner" | awk -F'>' '{print $2}'|awk -F'<' '{print $1}'>> recipientList.txt
done
recipientList=`sort -u recipientList.txt`
echo $recipientList
'''
}
}
stage('Generate list to Shelve or Delete') {
steps {
sh '''
for Project in `cat results.txt`
do
if [ -f "$JENKINS_HOME/jobs/$Project/nextBuildNumber" ]; then
nextBuildNumber=`cat $JENKINS_HOME/jobs/$Project/nextBuildNumber`
if [ $nextBuildNumber == '1' ]; then
echo "$JENKINS_HOME/jobs/$Project" >> jobs2Delete.txt
echo "$Project" >> jobList2Delete.txt
else
echo "$JENKINS_URL/job/$Project/shelve/shelveProject" >> Projects2Shelve.txt
echo "$Project" >> ProjectsList2Shelve.txt
fi
fi
done
'''
}
}
stage('Send email') {
steps {
emailext to: 'admin@mail.com',
from: 'jenkins@mail.com',
attachmentsPattern: 'ProjectsList2Shelve.txt,jobList2Delete.txt',
subject: "This is a subject",
body: "Hello\n\nAttached two lists of Jobs, to archive or delete,\nPlease Approve or Abort the Shelving / Deletion of the Projects:\n${env.JOB_URL}\n\nBlue Ocean:\n${env.RUN_DISPLAY_URL}\n\nyour Team"
}
}
stage('Approve or Abort') {
steps {
input message: 'OK to Shelve and Delete projects? \n Review the jobs list (Projects2Shelve.txt, jobs2Delete.txt) sent to your email', submitter: 'someone'
}
}
stage('Shelve or Delete') {
parallel {
stage('Shelve Project') {
steps {
withCredentials([usernamePassword(credentialsId: 'XYZ', passwordVariable: 'PA', usernameVariable: 'US')]) {
sh '''
for job2Shelve in `cat Projects2Shelve.txt`
do
curl -u $US:$PA $job2Shelve
done
'''
}
}
}
stage('Delete Project') {
steps {
sh '''
for job2Del in `cat jobs2Delete.txt`
do
echo "Removing $job2Del"
done
'''
}
}
}
}
}
post {
success {
emailext to: "$recipientListTest",
from: 'jenkins@mail.com',
attachmentsPattern: 'Projects2Shelve.txt,jobs2Delete.txt',
subject: "This is a subject",
body: "Hello\n\nAttached two lists of Jobs which were archived or deleted due to inactivity of more than 400 days\n\n\nyour Team"
}
}
}
I figured out that the only way would be to add a script block to the post section, together with a variable definition outside of the pipeline block:
post {
success {
script {
RECIPIENTLIST = sh(returnStdout: true, script: 'cat recipientListTest.txt')
}
emailext to: "${RECIPIENTLIST}",
from: 'jenkins@mail.com',
attachmentsPattern: 'Projects2Shelve.txt,jobs2Delete.txt',
subject: "MY SUBJECT",
body: "MY BODY"
}
When you execute a sh command, you cannot reuse the variables that you set within that command in later steps. You need to do something like this.
At the top of your pipeline file, to make this variable global:
def recipientsList
Then execute your shell command and retrieve the output:
recipientsList = sh (
script: '''for Project in `cat results.txt`
do
grep "mail.com" $JENKINS_HOME/jobs/$Project/config.xml|grep -iv "Ansprechpartner" | awk -F'>' '{print $2}'|awk -F'<' '{print $1}'>> recipientList.txt
done
recipientList2=`sort -u recipientList.txt`
echo $recipientList2
''',
returnStdout: true
).trim()
Now in your email you can use the variable $recipientsList.
I renamed your bash variable to recipientList2 to avoid confusion.
EDIT: I don't know what you want to obtain, but consider using some default recipients provided by emailext:
recipientProviders: [ developers(), culprits(), requestor(), brokenBuildSuspects(), brokenTestsSuspects() ],
What is the syntax of 'post' in a scripted pipeline, compared to a declarative pipeline?
https://jenkins.io/doc/book/pipeline/syntax/#post
In a scripted pipeline, everything must be written programmatically, and most of the post-style work is done in the finally block:
Jenkinsfile (Scripted Pipeline):
node {
try {
stage('Test') {
sh 'echo "Fail!"; exit 1'
}
echo 'This will run only if successful'
} catch (e) {
echo 'This will run only if failed'
// Since we're catching the exception in order to report on it,
// we need to re-throw it, to ensure that the build is marked as failed
throw e
} finally {
def currentResult = currentBuild.result ?: 'SUCCESS'
if (currentResult == 'UNSTABLE') {
echo 'This will run only if the run was marked as unstable'
}
def previousResult = currentBuild.getPreviousBuild()?.result
if (previousResult != null && previousResult != currentResult) {
echo 'This will run only if the state of the Pipeline has changed'
echo 'For example, if the Pipeline was previously failing but is now successful'
}
echo 'This will always run'
}
}
https://jenkins.io/doc/pipeline/tour/running-multiple-steps/#finishing-up
You can modify @jf2010's solution so that it looks a little neater (in my opinion):
try {
pipeline()
} catch (e) {
postFailure(e)
} finally {
postAlways()
}
def pipeline(){
stage('Test') {
sh 'echo "Fail!"; exit 1'
}
println 'This will run only if successful'
}
def postFailure(e) {
println "Failed because of $e"
println 'This will run only if failed'
}
def postAlways() {
println 'This will always run'
}