Set global environment variables inside Xcode build phase run script - xcode

I'm using Jenkins to do continuous integration builds. I have quite a few jobs that have much of the same configuration code. I'm in the midst of pulling this all out into a common script file that I'd like to run pre and post build.
I've been unable to figure out how to set some environment variables within that script so that both the Xcode build command and the Jenkins build can see them.
Does anyone know if this is possible?

It is not possible to do exactly what you ask. A process cannot change the environment variables of another process. The pre and post and actual build steps run in different processes.
But you can create a script that sets the common environment variables and share that script between all your builds.
The build step would first source the script in the current shell and then call xcodebuild:
# Note the dot in the beginning of the next line. It is not a typo.
. set_environment.sh
xcodebuild myawesomeapp.xcodeproj
The script could look like this:
export VARIABLE1=value1
export VARIABLE2=value2
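Putting the two together, a single Jenkins "Execute Shell" build step could look roughly like this (the pre/post helper script names are hypothetical, just to show that everything launched from the same shell inherits the exports):
#!/bin/bash
# Source the shared settings once; every command launched below inherits them.
. ./set_environment.sh
./pre_build.sh                                   # hypothetical pre-build helper
xcodebuild -project myawesomeapp.xcodeproj build
./post_build.sh                                  # hypothetical post-build helper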
How exactly your jobs will share the script depends on your environment and use case. You can
place the script in some well-known location on the Jenkins host or
place the script in the version controlled source tree if all your jobs share the same repository or
place the script in a repository of its own and make a Jenkins build which archives the script as a build artifact. All the other jobs would then use the Copy Artifact plugin to get a copy of the script from the artifacts of the script job.

From Apple's Technical Q&A QA1067 it appears that if you create the file /Users/YOU/.MacOSX/environment.plist and populate it with your desired environment variables, then all processes launched by that user (with the environment.plist file in their home dir) will pick up these environment variables. You may need to restart your computer (or just log out and back in) before a newly launched process will pick up the variables.
The article also claims that Xcode will pass these variables to a build phase script. I have not tested this yet, but next time I restart my MacBook I will let you know whether it worked.
From http://developer.apple.com/library/mac/#/legacy/mac/library/qa/qa1067/_index.html
Q: How do I set the environment for all processes launched by a specific user?
A: It is actually a fairly simple process to set environment variables for processes launched by a specific user.
There is a special environment file which loginwindow searches for each time a user logs in. The environment file is ~/.MacOSX/environment.plist (be careful, it's case sensitive), where '~' is the home directory of the user we are interested in. You will have to create the .MacOSX directory yourself using Terminal (by typing mkdir .MacOSX). You will also have to create the environment file yourself. The environment file is actually in XML/plist format (make sure to add the .plist extension to the end of the filename or this won't work).
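If you'd rather not hand-write the XML, one common way to create the file is the defaults command, which writes the plist for you. This is only a sketch; the variable names are illustrative, and as noted above you need to log out and back in (or restart) so loginwindow re-reads the file:
mkdir -p ~/.MacOSX
defaults write ~/.MacOSX/environment VARIABLE1 -string "value1"
defaults write ~/.MacOSX/environment VARIABLE2 -string "value2"
defaults read ~/.MacOSX/environment    # verify what was written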

Related

What is a foolproof way to make environment variables available to a script?

I have this script on an Ubuntu VM that uses an environment variable that is set in a script in /etc/profile.d/appsetup.sh.
The variable is used in my script, server-up.sh, to start my Java app:
export path=$KEYSTORE_PATH
java -jar -Dsecurity.keystore.path=$path [jarfile]
If I echo the variable, it works:
$ echo $KEYSTORE_PATH
/etc/ssl/certs
And if I run the script on my own (sudo sh server-up.sh) it runs and uses the environment variable just fine.
However, when the script is executed from Jenkins' "Execute Shell" step (on the same VM), it apparently can't access the environment variable, even though supposedly it's available system-wide.
I've tried setting the owner of server-up.sh to both root and jenkins and Jenkins runs it either way, but in neither case does it get the environment variables. In Jenkins, I also tried using the command sudo -E /[path]/server-up.sh but then the job fails and an error says sudo: sorry, you are not allowed to preserve the environment.
I've googled a dozen times for various things, but almost everything that comes up is people asking how to set environment variables in Jenkins, and I don't need to do that; I just want a script that Jenkins executes to have access to system environment variables.
What do I need to do to get this working?
Make a small change so that the /etc/profile.d/appsetup.sh script also writes the variable out to a file; the Jenkins job can then read that file and recreate the environment variable it needs for your script to run successfully.
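A minimal sketch of that idea, assuming you can edit /etc/profile.d/appsetup.sh (the properties file path is made up; pick somewhere the Jenkins user can read):
# In /etc/profile.d/appsetup.sh: besides exporting the variable, persist it
# to a file (illustrative path) that other processes can read.
export KEYSTORE_PATH=/etc/ssl/certs
echo "KEYSTORE_PATH=$KEYSTORE_PATH" > /var/tmp/jenkins-env.properties

# In the Jenkins "Execute Shell" build step: load it before starting the app.
. /var/tmp/jenkins-env.properties
export KEYSTORE_PATH
sh server-up.sh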
I don't think the context and needs are explained well enough to properly answer the question with a definitive "here's how you do it."
On a server, jenkins.war is launched from a shell (or from a root shell which invokes a shell script that launches Jenkins); that shell has an environment, and you can set and pass parameters to it. Those exist in the context of the jenkins.war process. If you run it from a daemon (init.d / systemd) you get a non-interactive shell, which is set up differently from your normal shell.
Your Jenkins will typically have Nodes launching agents on remote servers. Those are typically launched via a non-interactive shell (so no user .profile settings).
Then the jobs themselves run on one of the agents where the executor launches a shell for the job execution. Sub-shells may be launched for specific steps.
The two contexts you mention, sudo sh server-up.sh and Jenkins' "Execute Shell" step, do not inherit the same environment even on the same VM: the Node is launched in its own process using a non-interactive shell and is not aware of anything in your server-up.sh script; it (generally) just gets /etc/profile.
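A quick way to see the difference from a terminal on the VM (assuming appsetup.sh exports KEYSTORE_PATH as described in the question):
# Start from an empty environment, roughly like a freshly launched agent process:
env -i bash -c  'echo "${KEYSTORE_PATH:-unset}"'   # non-login shell: prints "unset"
env -i bash -lc 'echo "${KEYSTORE_PATH:-unset}"'   # login shell sources /etc/profile and
                                                   # /etc/profile.d/*.sh: prints "/etc/ssl/certs"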
You have options. You can set Global variables within Jenkins: ${JENKINS_URL}/configure
Global Properties
[ X ] Environment variables
[ X ] Prepare jobs environment (requires Env Inject plugin)
The same options also exist at the Node level
You can install the slaves-setup plugin, which allows you some customization when launching agents (aka slaves).
You can install the Environment Injector plugin, which adds the previously mentioned Prepare jobs environment feature.
It also adds jobs specific configuration options for:
[ X ] Prepare an environment for the run
Plus under the Build Environment section,
[ X ] Inject environment variables to the build process
and
[ X ] Inject passwords to the build as environment variables
Those are encrypted, and I believe they are masked as well.
Finally, you can add a build step to Inject environment variables, useful if you need to have different values for different steps.
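For the case in the question, the Properties Content for such a step could be as small as a single line (value taken from the question):
KEYSTORE_PATH=/etc/ssl/certs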
BUT it's certs in a keystore!
Given that you also mention what you are trying to make available is $KEYSTORE_PATH=/etc/ssl/certs, I wonder if you've explored using the Credentials plugin? It supports a wide variety of credential types, including:
Password
Username and password
SSH private key
Public Certificate and private key
Binary blob data
That OAuth thingy
The obvious benefit of using this approach over cooking your own is that it's been designed to work securely with Jenkins, so your secrets don't get inadvertently exposed. Aside from the extensive documentation on the plugin site, there's more in the Jenkins handbook on Using credentials and on using them in a pipeline (which also mentions that the Snippet Generator will stub the step for you), and on the CloudBees site: Injecting secrets into builds. You can probably find plenty of help here on Stack Overflow and on DevOps Stack Exchange.
You may also wish to explore the newly introduced Git credentials binding for sh, bat, and powershell, though not sure that's applicable in your case.

Does azure pipeline 'command line' agent job inherit working directory from the previous job?

My understanding of Azure Pipelines agent jobs was that:
Each job is independent.
Each 'command line' job runs in its own context with an independent scope.
But if the working directory of an Azure Pipelines 'command line' job is not set, then it defaults to the working directory from the previous 'command line' agent job.
Before I answer the question I want to make sure the terminology is clear:
pipelines are the overall definition of your ci cd process, they can contain multiple stages.
stages are phases of your pipeline, like build, test, deploy... They can contain multiple jobs.
jobs are collections of tasks/steps needed to implement your process. They contain one or more task/step.
tasks or steps are the actual actions being executed, like "execute this command" or "build that dotnet project"...
The environment is reset between jobs (meaning a new virtual machine will be used, sources pulled again, etc.). Between tasks or steps that belong to the same job, you keep the same environment, and each task "benefits" from the outcome (files changed, environment variables, ...) of the previous ones.
In terms of working directory, they all default to $(System.DefaultWorkingDirectory) (see the Azure DevOps default variables).
If you set the working directory of one task to something different, it will not impact other tasks.
If you use a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, and the virtual machine is discarded after one use. Each job may use a different agent, so you should not assume that state from an earlier job is available during subsequent jobs.
And here is a simple test of it.
I created two agent jobs in my pipeline, added a command line task to each, and ran the agent jobs one by one. In the first command line task, I created a .txt file in the $(Agent.BuildDirectory) folder and then read it.
In the second command line task, I just changed to that folder and tried to read the .txt file.
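The two command line task scripts were along these lines (a simplified, shell-style sketch; the file name is made up):
# First agent job's command line task: create a file and read it back (works).
# $(Agent.BuildDirectory) is expanded by the pipeline before the script runs.
echo "hello" > "$(Agent.BuildDirectory)/test.txt"
cat "$(Agent.BuildDirectory)/test.txt"

# Second agent job's command line task: a fresh environment, so the file is gone.
cat "$(Agent.BuildDirectory)/test.txt"    # fails: No such file or directory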
In the end, the second task failed and showed me the error message.
If I set the working directory in the first task and do not set it in the second task, the two tasks' working directories are different.

Windows GitLab CI Runner using Bash

I'm trying to use bash as the shell on Windows for a GitLab CI Runner.
concurrent = 1
check_interval = 0
[[runners]]
name = "DESKTOP-RQTQ13S"
url = "https://example.org/ci"
token = "fooooooooooooooooooobaaaaaaaar"
executor = "shell"
shell = "bash"
[runners.cache]
Unfortunately I cannot find an option to specify the actual shell program that the CI Runner should use. By default, it just tries to run bash, which it cannot find. I don't know why, because when I open up a Windows command line and enter bash, it works.
Running with gitlab-ci-multi-runner 1.9.4 (8ce22bd)
Using Shell executor...
ERROR: Build failed (system failure): Failed to start process: exec: "bash": executable file not found in %PATH%
I tried adding a file bash.cmd to my user directory containing
#"C:\Program Files\Git\usr\bin\bash.exe" -l
That gives me this strange error:
Running with gitlab-ci-multi-runner 1.9.4 (8ce22bd)
Using Shell executor...
Running on DESKTOP-RQTQ13S...
/usr/bin/bash: line 43: /c/Users/niklas/C:/Users/niklas/builds/aeb38de4/0/niklas/ci-test.tmp/GIT_SSL_CAINFO: No such file or directory
ERROR: Build failed: exit status 1
Is there a way to properly configure this?
There are two issues going on here, and both can probably be solved.
gitlab-runner cannot find bash
gitlab-runner doesn't combine unix-style and Windows-style paths very well.
You have essentially succeeded in solving the first one by creating the bash.cmd file. But if you're curious about why it didn't work without it, my guess is that bash runs in your command prompt because the directory that contains it (e.g. in your case "C:\Program Files\Git\usr\bin") is included in the PATH environment variable for your user account. But perhaps you are running the gitlab-runner in the system account, which might not have the same PATH.
So the first thing to do is just check your system's PATH variable and add the bin directory if necessary (i.e. using the System applet in the Control Panel as described here or here). Just make sure you restart your machine after you make the change, because the change isn't applied until after you restart. That should make bash work, even when called from a service running in the system or admin account.
As for the strange error you got after creating bash.cmd, that was due to the second issue. Paths are often really hard to get right when combining bash and Windows. Gitlab-runner is probably trying to determine whether the build path is relative or absolute, and ends up prepending the windows path with what it thinks is the working directory ($PWD). This looks like a bug, but gitlab still has not fixed it (as of version 9.0 of the runner!!) and probably never will. Maybe they have decided it is not a bug or that it is due to bugs in underlying software or tools that they can't fix or that it would be too difficult to fix. Anyway, I've discovered a work-around. You can specify the base path for builds in the config.toml file. If you use a unix-style path, it fixes the problem.
On windows, config.toml is usually in the same folder as your gitlab-runner.exe (or gitlab-multi-runner-amd64.exe etc). Open that file in your favorite text editor. Then find the [[runners]] section and add two lines similar to the following.
builds_dir="/c/gitlab-runner/builds/"
cache_dir="/c/gitlab-runner/cache/"
The path you use should be the "bash version" of whatever directory you want gitlab-runner to use for storing builds etc. Importantly if you are using cygwin, you would use a path similar to /cygdrive/c/... instead of just /c/... (which is appropriate for msys-git or standalone MSYS2 etc).
Here's an example of a config.toml file:
[[runners]]
name = "windows"
url = "https://your.server.name"
token = "YOUR_SECRET_TOKEN"
executor = "shell"
shell = "bash"
builds_dir="/c/gitlab-runner/builds/"
cache_dir="/c/gitlab-runner/cache/"
It looks like you're attempting to link gitlab-ci up with the Windows Subsystem for Linux (which can be accessed by typing bash at the Windows command prompt)? I doubt that this is supported directly by Gitlab's runner configuration.
Instead, I would suggest using Powershell with your shell executor.
executor = "shell"
shell = "powershell"
You can then drop down into Bash in the scripts you call from .gitlab-ci.yml.
Given that it's bad practice to execute more than very trivial shell scripts within the .gitlab-ci.yml itself (as opposed to calling out to an external script), you lose little by being forced to use a native Windows shell.
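For example, the PowerShell step could hand off to Git Bash with something like & 'C:\Program Files\Git\bin\bash.exe' -l ./ci/build.sh, where the script itself is ordinary Bash (the path and script name are hypothetical):
#!/usr/bin/env bash
# ci/build.sh -- hypothetical build script called from the PowerShell step,
# e.g. via: & 'C:\Program Files\Git\bin\bash.exe' -l ./ci/build.sh
set -euo pipefail
make
make test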

Run Jenkins' Cygwin script as user

I have Jenkins running on Windows, and I have a build that works fine under CygWin bash from the CygWin terminal, so I now want to automate it. However, using this script:
#!C:\cygwin\bin\bash.exe
whoami
make
The system reports me as nt authority\system, not the ken that I get when using an interactive shell. Is there an easy way to persuade Jenkins or CygWin to run as me?
Most likely you are running Jenkins with the default installation. You have two options. The first is mentioned in the comment: change the "Service account" to be the same as yours.
The second option is derived from best practices: run the Jenkins master on a system with backups etc., configure a slave node with your account credentials, and change the project configuration to build on that specific node.
(It is possible to run slave and master on same machine with different credentials - just in case you want to try out things)
The real problem I was having was not that the shell script was running as the wrong user, but that the shell script was not executing the default /etc/profile. So, the solution was simply:
#!C:\cygwin\bin\bash.exe -l
whoami
make
I was still nt authority\system, but now I had the correct environment set up and could run make successfully.
Note also that if I create a /home/system directory I can add .bash_profile, etc, to that directory to further customise the build environment.
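For example, a hypothetical /home/system/.bash_profile could look like this:
# /home/system/.bash_profile -- read by "bash -l" when the build runs as nt authority\system
export PATH="/usr/local/bin:$PATH"
export BUILD_CONFIG=release    # illustrative variable consumed by the build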

Jenkins is not picking up Environment variable to be used in windows batch script

I am building a Visual Studio solution containing a number of projects. I wanted to disable the multi-process build, so I tried setting an environment variable CL to /MP1. But it didn't work in Jenkins when running the batch script that builds the solution from the command line.
Good morning,
Log in to your Jenkins server and stop Jenkins from the command line. While doing this, open your web browser and refresh the Jenkins web page to make sure it has stopped (it will take around 5 seconds to stop the service). Then start it again from the command line; it will pick up the updated variable. I did this yesterday to run my unit tests. It should work.
To set environment variables for individual projects, use the checkbox 'Prepare an environment for the run' and set the environment variables you want, in the format 'ENV=value', in the Properties Content box.
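For the variable in the question that would be a single line in the Properties Content box:
CL=/MP1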
Otherwise, all I can suggest is that you haven't restarted the Jenkins service after setting your variable in Windows.
You can also use the EnvInject plugin; it works well.
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
