Does an Azure Pipelines 'command line' agent job inherit the working directory from the previous job? - Windows

My understanding of Azure Pipelines agent jobs was that:
Each job is independent.
Each 'command line' job runs in its own context with an independent scope.
But if the working directory of an Azure Pipelines 'command line' job is not set, it seems to default to the working directory of the previous 'command line' agent job.

Before I answer the question I want to make sure the terminology is clear:
Pipelines are the overall definition of your CI/CD process; they can contain multiple stages.
Stages are phases of your pipeline, like build, test, deploy... They can contain multiple jobs.
Jobs are collections of tasks/steps needed to implement your process. They contain one or more tasks/steps.
Tasks or steps are the actual actions being executed, like "execute this command" or "build that .NET project"...
The environment is reset between each job (meaning a new virtual machine will be used, sources pulled again, etc.). Between tasks or steps that belong to the same job, you keep the same environment, and each task "benefits" from the outcome (files changed, environment variables...) of the previous ones.
In terms of working directory, they all default to the build.workingDirectory (see the Azure DevOps default variables).
If you set the working directory of one task to something different, it will not impact the other tasks.
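As a quick illustration, take two command line tasks in the same job on a Windows agent, where only the first has its working directory input set (a sketch; the exact paths depend on your agent):

First task (working directory input set to $(Agent.BuildDirectory)):
echo %CD%

Second task (working directory input left empty):
echo %CD%

The two tasks print different paths; the first task's setting does not carry over to the second.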

If you use a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine. The virtual machine is discarded after one use. Each job may use a different agent, so you should not assume that the state from an earlier job is available during subsequent ones.
The following is a simple test of this.
I created two agent jobs in my pipeline, added a command line task to each, and ran the agent jobs one by one. In the first command task, I created a .txt file in the $(Agent.BuildDirectory) folder and then read it.
In the second command task, I just changed to that folder and tried to read the .txt file.
The second task failed and showed an error message.
Likewise, if I set the working directory in the first task and leave it unset in the second task, the two tasks' working directories are different.
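A minimal sketch of the two command line scripts in such a test (the file name is a placeholder; in batch scripts, $(Agent.BuildDirectory) is available as the environment variable %AGENT_BUILDDIRECTORY%):

First job's command task:
echo test> "%AGENT_BUILDDIRECTORY%\marker.txt"
type "%AGENT_BUILDDIRECTORY%\marker.txt"

Second job's command task:
type "%AGENT_BUILDDIRECTORY%\marker.txt"

On a Microsoft-hosted agent the second task fails, because the second job runs on a fresh virtual machine where the file was never created.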

Related

Error Raised Only When Using Jenkins Pipeline

I am trying to set up a Jenkins pipeline using a scripted pipeline with a Windows 2019 server; however, I ran into this error while trying to build:
> webpack --config ./config/webpack-cli-prod.config.js
C:\myProject\node_modules\webpack\lib\javascript\JavascriptModulesPlugin.js:143
throw new TypeError(
^
TypeError: The 'compilation' argument must be an instance of Compilation
at Function.getCompilationHooks (C:\myProject\node_modules\webpack\lib\javascript\JavascriptModulesPlugin.js:143:10)
at SourceMapDevToolModuleOptionsPlugin.apply (C:\myProject\node_modules\webpack\lib\SourceMapDevToolModuleOptionsPlugin.js:50:27)
at C:\myProject\node_modules\webpack\lib\SourceMapDevToolPlugin.js:163:53
at Hook.eval [as call] (eval at create (C:\myProject\node_modules\tapable\lib\HookCodeFactory.js:19:10), <anonymous>:100:1)
at Hook.CALL_DELEGATE [as _call] (C:\myProject\node_modules\tapable\lib\Hook.js:14:14)
at Compiler.newCompilation (C:\myProject\node_modules\webpack\lib\Compiler.js:1122:26)
at C:\myProject\node_modules\webpack\lib\Compiler.js:1166:29
at Hook.eval [as callAsync] (eval at create (C:\myProject\node_modules\tapable\lib\HookCodeFactory.js:33:10), <anonymous>:6:1)
at Hook.CALL_ASYNC_DELEGATE [as _callAsync] (C:\myProject\node_modules\tapable\lib\Hook.js:18:14)
at Compiler.compile (C:\myProject\node_modules\webpack\lib\Compiler.js:1161:28)
I tried to run the same command/step using a freestyle Jenkins job, and it works without this error.
I tried to run the same command on the Jenkins agent locally, and it works without this error.
I looked it up on Google and came across this link here; I tried to use a newer version of html-webpack-plugin, and we also tried to build without the plugin. All attempts come to the same result: the error occurs only when running from the Jenkins scripted pipeline.
I also tried with a different server, while keeping the same agent and job configuration, and I also get the same error.
The npm version is 8.11.0 and the Node version is 16.16.0. The agent is connected by running the agent.jar file on the agent machine.
The only difference I see between the freestyle job and the scripted pipeline job is that the freestyle job appears to be run as SYSTEM by the Jenkins server, whereas the pipeline job is probably run with lower privileges (I am not entirely sure, though). I also saw this post, where it says
in the Freestyle job everything is executed in the agent, but for the Scripted Pipeline Job, the pipeline code is translated in the controller to atomic commands that are sent to the agents.
But I have no idea how to make the scripted pipeline job run just like the freestyle job.
On one hand, it appears to have to do with webpack, and on the other it appears to be related to Jenkins, since running the freestyle job and running locally on the server work without errors.
This is what my Jenkins scripted pipeline looks like (with sensitive information removed):
node("My-Server"){
dir("C:\\MyProject"){
stage('Pre-Test Build Client (Web)') {
dir("aFolder"){
bat 'npm run build-all-prod' // This is the script that invoke the webpack build command
}
}
}
}
I have run out of options, do not know where to go from here, and couldn't find any more helpful information on Google. Any help here would be really appreciated. Thank you.
I'm not sure if Jenkins creates the same environment variables and command line tools in both cases (you can configure some on the node's configuration page). I would check whether the node's environment variables and tools are the same in the freestyle job vs the pipeline job by running something like this in each job and comparing the output:
bat 'echo %PATH%'
bat 'where webpack'
bat 'npm list webpack'
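If those look identical, dumping the entire environment in both job types and diffing the output can also reveal differences (a simple sketch using only built-in commands):
bat 'set'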
Another thing worth checking is whether you're using a batch script in both jobs, and not e.g. a shell script in the freestyle job.
Lastly, I found a GitHub issue with the same error as yours, caused by having 2 different installations of webpack, one on v5. It may be worth looking into if everything else fails.

How to run a custom command on the Rundeck host in Rundeck?

I'm creating a Rundeck job and I want to run a custom command (e.g. git pull) on the Rundeck host itself within a job working on other nodes. I can see the Command or Script node steps, but is there a matching workflow step?
Context
I'm pretty new to Rundeck, so here's some context on what I'm trying to achieve; maybe my whole design is wrong. I'm pretty familiar with Ansible, though, and I'm trying to integrate it with Rundeck, treating Rundeck as an executor of Ansible scripts.
We're developing a software product which is an on-prem solution and is quite complex to install (it requires deep OS configuration). We want to develop it in a Continuous Delivery fashion, as our cloud products are. So in the git repository, along with the product, we keep the Ansible workspace (playbooks, roles, requirements, custom tasks - everything except the inventory), and on every commit the Ansible workspace should be compatible with that particular product version.
My current approach is the following: the build pipeline publishes as artifacts both the build of the product and the zipped Ansible workspace. Whenever we want to deploy, we run a Rundeck job, which:
downloads the Ansible workspace from the artifacts (alternative idea: pulls the repository at the proper commit)
runs the Ansible playbook (via the Ansible workflow step), which does the work on the selected nodes
How can I perform this first step? From what I can see, I can run a script or command on nodes, but in a particular job run the nodes are the target machines, not the Rundeck host. There is also the SCM git plugin for Rundeck, but it loads jobs from a repository, not an Ansible workspace.
A good approach for that:
Integrate Rundeck with Ansible following this (consider that Rundeck and Ansible must coexist on the same server).
Create a new job; by default, new Rundeck jobs are configured to run locally.
As the first step you can add a "script step" (inline script) that moves to the desired directory and runs the git pull command (you can also use this approach to clone the repo if you need that); see the sketch after this list.
Now, the next step can be the execution of your Ansible playbook (Ansible Playbook Workflow Node Step) in your job.
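For the script step mentioned above, the inline script can be as small as this (the workspace path is a placeholder; adjust it to wherever your repository lives on the Rundeck server):

# runs on the Rundeck server itself, since the job is configured to run locally
cd /var/lib/rundeck/ansible-workspace
git pull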

sh file is not running on slave node in Jenkins?

Hi, I am fairly new to Jenkins configuration and I am stuck on running an sh file on a slave node. I have created two jobs: one creates some .sh and .jar files, and the other copies them to all the slave nodes. After the build I need to run the .sh file, which runs locally but not from the master. I am specifying the path, but Jenkins always runs some blank .sh file from the tmp folder.
Whereas in the job config I have given this (screenshot omitted).
The slave.sh file is present on the remote slave, but Jenkins is not running it. What is the possible cause?
I really do not understand what you are trying to do. You have a very strange way of dividing the work and then executing part of it in a post-build step. There should be very few use cases for a post-build step. Maybe you could just try to execute the slave.sh script in a normal build step (see the sketch below)? And maybe execute it directly from the source location without copying it to another location.
If I'm missing something and it really is necessary to execute slave.sh in a post-build step, please verify that the path to the script is correct. There are several similar but slightly different paths in your question, and I cannot say whether that is on purpose, but probably not.
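As a sketch of that suggestion, a normal "Execute shell" build step could run the script directly (the path is a placeholder for wherever slave.sh actually lives on the node):

# make sure the script is executable, then run it from its source location
chmod +x /home/jenkins/scripts/slave.sh
/home/jenkins/scripts/slave.sh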

Share Timestamp within Jenkins

I have a Jenkins job where I execute a command from within the Maven plugin which executes an Ant build script. The job also does 2 Ant calls, as there are 2 mirror servers. Something like this:
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties
Where Maven connects to each server and executes something similar.
My question is how I can share the timestamp of when the Jenkins job starts between the 2 Ant calls. In my Ant job I have a backup build step before rolling in new code, but I need logic so that if the dump/backup was done on the first host, it is not done on the second one, since they share a MySQL instance and core files on an NFS mount. What happens right now is that there is no such logic, and when the second Ant call runs the dump on the second server, it overwrites the previous dump from the first instance with the new data and updated MySQL.
So I was thinking of creating a touch task to touch some file, since I have a shared directory between the 2 servers. But I have the same build.xml for both server instances, so the touch would also be executed on the second Ant call and overwrite the modification time from the first Ant call.
I thought of sharing the Jenkins timestamp property of when the job starts between the 2 Ant calls. I do not know if this is possible.
Thanks in advance for advice.
I suggest you use the environment variable BUILD_NUMBER set by Jenkins and store it on your NFS, as a property file for instance.
If that property file doesn't exist, or if the value of the property loaded from it doesn't match the environment variable set by Jenkins, it means the current node is the first to run for that Jenkins build. So it can do the backup, and it would then overwrite the shared property file with the current build number.
If the loaded property matches the current build number, then the first run has already been done and there is no backup to do.
Implementation hints:
add the build number to the command line
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties -Dexpected.jenkins.build.number=$BUILD_NUMBER
use ant contrib if/then/else tasks: http://ant-contrib.sourceforge.net/tasks/tasks/index.html
write the property file with:
<echo file="/mnt/nfs/shared/jenkins.properties">jenkins.build.number=${expected.jenkins.build.number}</echo>
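If you would rather keep the check outside build.xml, the same first-run logic can be sketched as a plain shell step before each Ant call. This swaps the ant-contrib approach for shell, and the property file path is an assumption:

# compare the build number stored on the NFS share with the one set by Jenkins
PROP_FILE=/mnt/nfs/shared/jenkins.properties
LAST=$(grep '^jenkins.build.number=' "$PROP_FILE" 2>/dev/null | cut -d= -f2)
if [ "$LAST" != "$BUILD_NUMBER" ]; then
    # first run for this build: do the dump/backup here, then record the build number
    echo "jenkins.build.number=$BUILD_NUMBER" > "$PROP_FILE"
fi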

Set global environment variables inside Xcode build phase run script

I'm using Jenkins to do continuous integration builds. I have quite a few jobs that share much of the same configuration code. I'm in the midst of pulling this all out into a common script file that I'd like to run pre- and post-build.
I've been unable to figure out how to set some environment variables within that script so that both the Xcode build command and the Jenkins build can see them.
Does anyone know if this is possible?
It is not possible to do exactly what you ask. A process cannot change the environment variables of another process. The pre, post, and actual build steps run in different processes.
But you can create a script that sets the common environment variables and share that script between all your builds.
The build would first call your shell to execute the commands in the script and then call xcodebuild:
# Note the dot in the beginning of the next line. It is not a typo.
. set_environment.sh
xcodebuild myawesomeapp.xcodeproj
The script could look like this:
export VARIABLE1=value1
export VARIABLE2=value2
How exactly your jobs share the script depends on your environment and use case. You can:
place the script in some well-known location on the Jenkins host, or
place the script in the version-controlled source tree, if all your jobs share the same repository, or
place the script in a repository of its own and create a Jenkins build which archives the script as a build artifact. All the other jobs would then use the Copy Artifact plugin to get a copy of the script from the artifacts of the script job.
From Apple's Technical Q&A QA1067 it appears that if you create the file /Users/YOU/.MacOSX/environment.plist and populate it with your desired environment variables, all processes launched by that user will pick up these environment variables. You may need to restart your computer (or just log out and back in) before a newly launched process will pick up the variables.
The article also claims that Xcode will pass these variables to a build phase script. I have not tested it yet, but next time I restart my MacBook I will let you know if it worked.
From http://developer.apple.com/library/mac/#/legacy/mac/library/qa/qa1067/_index.html
Q: How do I set environment for all processes launched by a specific user?
A: It is actually a fairly simple process to set environment variables for processes launched by a specific user.
There is a special environment file which loginwindow searches for each time a user logs in. The environment file is: ~/.MacOSX/environment.plist (be careful, it's case sensitive), where '~' is the home directory of the user we are interested in. You will have to create the .MacOSX directory yourself using terminal (by typing mkdir .MacOSX). You will also have to create the environment file yourself. The environment file is actually in XML/plist format (make sure to add the .plist extension to the end of the filename or this won't work).
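As an aside, the file can also be created and populated from Terminal with the defaults tool instead of writing the XML by hand (the variable names and values here are just the placeholders from the example script above):

mkdir -p ~/.MacOSX
defaults write ~/.MacOSX/environment VARIABLE1 -string "value1"
defaults write ~/.MacOSX/environment VARIABLE2 -string "value2"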
