We have an Ant script with a chown task as follows:
<chown owner="${user}" verbose="true">
<fileset dir="${dev-home}" includes="**/**"/>
<dirset dir="${dev-home}" includes="**/**"/>
</chown>
The task fails, but it only reports that it failed, without stating why or showing the command that was executed. How can we debug this?
You can run your Ant build file in debug mode with:
ant -debug -f yourbuildfile.xml
For debugging specific parts, use the techniques described in this thread: Make ant quiet without the -q flag?
Also check Ant's manual for the chown task, which reads:
If you are working on a large number of files this may result in a command line that is too long for your operating system. If you encounter such problems, you should set the maxparallel attribute of this task to a non-zero value. The number to use highly depends on the length of your file names (the depth of your directory tree) and your operating system, so you'll have to experiment a little.
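If that is the cause here, a minimal sketch of the fix looks like this (the value 300 is only an illustration; as the manual says, you'll have to experiment):
<chown owner="${user}" verbose="true" maxparallel="300">
<fileset dir="${dev-home}" includes="**/**"/>
<dirset dir="${dev-home}" includes="**/**"/>
</chown>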
I am running a Jenkins slave in a restricted environment. This environment only allows me to execute files in a specific directory.
The problem I have is running simple batch commands.
Because the slave's java.io.tmpdir is AppData/Local/Temp, Jenkins copies my command into a temporary .bat file there and attempts to run it, like so:
cmd /c call D:\Users\TastyWithPasta\AppData\Local\Temp\hudson8090039221524722157.bat
Here the issue becomes obvious: the command cannot be run due to the restriction, and the build fails.
Anybody working in a restricted environment and facing the same issues? What would be a good workaround?
Unfortunately, -Djava.io.tmpdir=newpath is not an option since this taps into the Java installation. Maybe there is a way to override it locally?
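That said, java.io.tmpdir is a per-JVM system property rather than a Java-installation setting, so a hedged sketch would be to pass it only to the JVM that launches the slave agent (the D:\AllowedDir\tmp path is purely hypothetical, and the trailing ... stands for whatever connection arguments you already use):
java -Djava.io.tmpdir=D:\AllowedDir\tmp -jar slave.jar ...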
I'm trying to use bash as the shell on Windows for a GitLab CI Runner.
concurrent = 1
check_interval = 0
[[runners]]
name = "DESKTOP-RQTQ13S"
url = "https://example.org/ci"
token = "fooooooooooooooooooobaaaaaaaar"
executor = "shell"
shell = "bash"
[runners.cache]
Unfortunately, I cannot find an option to specify the actual shell program that the CI runner should use. By default, it just tries to run bash, which it cannot find. I don't know why, because when I open a Windows command line and enter bash, it works.
Running with gitlab-ci-multi-runner 1.9.4 (8ce22bd)
Using Shell executor...
ERROR: Build failed (system failure): Failed to start process: exec: "bash": executable file not found in %PATH%
I tried adding a file bash.cmd to my user directory containing
@"C:\Program Files\Git\usr\bin\bash.exe" -l
That gives me this strange error:
Running with gitlab-ci-multi-runner 1.9.4 (8ce22bd)
Using Shell executor...
Running on DESKTOP-RQTQ13S...
/usr/bin/bash: line 43: /c/Users/niklas/C:/Users/niklas/builds/aeb38de4/0/niklas/ci-test.tmp/GIT_SSL_CAINFO: No such file or directory
ERROR: Build failed: exit status 1
Is there a way to properly configure this?
There are two issues going on here, and both can probably be solved.
gitlab-runner cannot find bash
gitlab-runner doesn't combine unix-style and Windows-style paths very well.
You have essentially succeeded in solving the first one by creating the bash.cmd file. But if you're curious about why it didn't work without it, my guess is that bash runs in your command prompt because the directory that contains it (in your case "C:\Program Files\Git\usr\bin") is included in the PATH environment variable for your user account. But perhaps you are running gitlab-runner under the system account, which might not have the same PATH.
So the first thing to do is check your system's PATH variable and add the bin directory if necessary (e.g. using the System applet in the Control Panel as described here or here). Just make sure you restart your machine after you make the change, because it isn't applied until after you restart. That should make bash work, even when called from a service running under the system or an admin account.
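If you prefer the command line over the Control Panel, a hedged alternative (the path assumes Git for Windows' default install location; note that setx /M needs an elevated prompt, rewrites the machine-wide PATH, and merges your user PATH in via the %PATH% expansion, so check the result afterwards):
setx /M PATH "%PATH%;C:\Program Files\Git\usr\bin"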
As for the strange error you got after creating bash.cmd, that was due to the second issue. Paths are often hard to get right when combining bash and Windows. gitlab-runner is probably trying to determine whether the build path is relative or absolute, and ends up prepending the Windows path with what it thinks is the working directory ($PWD). This looks like a bug, but GitLab still has not fixed it (as of version 9.0 of the runner!!) and probably never will. Maybe they have decided it is not a bug, or that it is due to bugs in underlying software or tools that they can't fix, or that it would be too difficult to fix.
Anyway, I've discovered a workaround: you can specify the base path for builds in the config.toml file. If you use a unix-style path, it fixes the problem.
On Windows, config.toml is usually in the same folder as your gitlab-runner.exe (or gitlab-multi-runner-amd64.exe, etc.). Open that file in your favorite text editor, then find the [[runners]] section and add two lines similar to the following.
builds_dir = "/c/gitlab-runner/builds/"
cache_dir = "/c/gitlab-runner/cache/"
The path you use should be the "bash version" of whatever directory you want gitlab-runner to use for storing builds and so on. Importantly, if you are using Cygwin, you would use a path similar to /cygdrive/c/... instead of just /c/... (which is appropriate for msys-git or standalone MSYS2, etc.).
Here's an example of a config.toml file:
[[runners]]
name = "windows"
url = "https://your.server.name"
token = "YOUR_SECRET_TOKEN"
executor = "shell"
shell = "bash"
builds_dir = "/c/gitlab-runner/builds/"
cache_dir = "/c/gitlab-runner/cache/"
It looks like you're attempting to link GitLab CI up with the Windows Subsystem for Linux (which can be accessed by typing bash at the Windows command prompt)? I doubt that this is supported directly by GitLab's runner configuration.
Instead, I would suggest using PowerShell with your shell executor:
executor = "shell"
shell = "powershell"
You can then drop down into Bash in the scripts you call from .gitlab-ci.yml.
Given that it's bad practice to execute more than very trivial shell scripts within the .gitlab-ci.yml itself (as opposed to calling out to an external script), you lose little by being forced to use a native Windows shell.
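As a minimal sketch of what that can look like (the bash.exe path assumes Git for Windows' default install location, and build.sh is a hypothetical script in your repository):
build:
  script:
    - '& "C:\Program Files\Git\bin\bash.exe" -l -c "./build.sh"'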
Hi, I am fairly new to Jenkins configuration and I am stuck on running an .sh file on a slave node. I have created two jobs: one creates some .sh and .jar files, and the other copies them to all the slave nodes. After the build I need to run the .sh file, which works when run locally but not from the master. I am specifying the path, but Jenkins always runs some blank .sh file from the tmp folder.
Whereas in the job config I have given this:
The slave.sh file is present on the remote slave, but Jenkins is not running it. What is the possible cause?
I do not really understand what you are trying to do. You have a very strange way of dividing the work and then executing part of it in a post-build step. There should be very few use cases for a post-build step. Maybe you could just try to execute the slave.sh script in a normal build step? And maybe execute it directly from the source location, without copying it to another location.
If I'm missing something and it really is necessary to execute slave.sh in a post-build step, please verify that the path to the script is correct. There are several similar but slightly different paths in your question, and I cannot say whether that is on purpose, but probably not.
I have a Jenkins job where I execute a command from within a Maven plugin, which runs an Ant build script. The job also makes two Ant calls, as there are two mirror servers. Something like this:
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties
Where Maven connects to each server and executes something similar.
My question is: how can I share the timestamp of when the Jenkins job starts between the two Ant calls? In my Ant job I have a backup build step before rolling in new code, but I need logic so that if the dump/backup was done on the first host, it is not done on the second one, as they share a MySQL instance and core files on an NFS mount. What happens right now is that there is no such logic, and when the second Ant call runs the dump on the second server, it overwrites the previous dump from the first instance with the new data and the updated MySQL.
So I was thinking of creating a touch task to touch some file, since I have a shared directory between the two servers. But I have the same build.xml for both server instances, so the touch would be executed on the second Ant call as well and would overwrite the modification time from the first Ant call.
I thought about sharing the Jenkins timestamp property of when the job starts between the two Ant jobs. I do not know if this is possible.
Thanks in advance for any advice.
I suggest you use the environment variable BUILD_NUMBER set by Jenkins and store it on your NFS mount, as a property file for instance.
So if that property file doesn't exist, or if the value of the property loaded from there doesn't match the environment variable set by Jenkins, it means the current node is the first to run for that Jenkins job, so it can do the backup. That node would then overwrite the shared property file with the current build number.
If the loaded property matches the current build number, then the first run has already been done and there is no backup to do.
Implementation hints:
add the build number to the command line
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties -Dexpected.jenkins.build.number=$BUILD_NUMBER
use the ant-contrib if/then/else tasks: http://ant-contrib.sourceforge.net/tasks/tasks/index.html
write the property file with:
<echo file="/mnt/nfs/shared/jenkins.properties">jenkins.build.number=${expected.jenkins.build.number}</echo>
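Putting those hints together, a minimal sketch of the build.xml logic (this assumes the ant-contrib jar is on Ant's classpath, and the backup target is a hypothetical stand-in for your dump step):
<taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
<!-- Loads jenkins.build.number if the shared file exists; otherwise the property stays unset -->
<property file="/mnt/nfs/shared/jenkins.properties"/>
<if>
  <not><equals arg1="${jenkins.build.number}" arg2="${expected.jenkins.build.number}"/></not>
  <then>
    <!-- First node to run for this build: do the backup, then record the build number -->
    <antcall target="backup"/>
    <echo file="/mnt/nfs/shared/jenkins.properties">jenkins.build.number=${expected.jenkins.build.number}</echo>
  </then>
</if>
If the property file is missing, ${jenkins.build.number} stays unexpanded and so cannot equal the real build number, which means the first node still takes the backup branch.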
I've started using Doxygen to document my team's project source code (we have C#, Objective-C, and Android/Java projects). I wrote up a Windows batch script which checks out the latest trunk versions of each project and uses the command-line Doxygen to generate HTML documentation sites and publish to a directory on the local file system which IIS 7 already hosts. This batch script works perfectly and does everything it needs to, though it takes 10 - 20 minutes to run completely.
Now I'm trying to automate the process so that it will run at the end of every day. I added a scheduled task which simply runs the batch script. Every part of the script seems to work except for the Doxygen part. I can log into the machine, watch the file system, and see that working copies are being checked out with no problem and the cleanup stuff works. However, it never generates the Doxygen HTML output; the output/target directories Doxygen is configured to use stay empty every time. I'm finding no error messages of any kind (in Scheduled Tasks or eventvwr). It doesn't work whether I let the Task Scheduler start it on its own or tell it to run the task now. As I said earlier, I can double-click the batch file and run it normally, and everything works fine that way.
The process runs on our development server, an older Dell workstation running Windows Vista Business 32-bit. I have the scheduled task running under the "System" account, though I have also tried "Local Service" and my own Active Directory domain account (which is an administrator on this server), and it still doesn't work.
Has anyone else successfully used the task scheduler to automate Doxygen? I have no idea what I'm doing wrong. What should I look for next?
I can post slightly anonymized versions of my batch file and Doxygen config files if necessary.
In your batch file, try adding redirection of the doxygen output to a log file. Then run it through the scheduler and see what output was generated. If doxygen encounters an error when run that way you should see it in the log file.
doxygen doxyfile > doxygen.log 2>&1
Also make sure that your batch file runs correctly even when invoked from a directory other than the one where the Doxygen files reside. When run through the Task Scheduler, I think the current directory will be c:\windows\system32, so try this:
c:\windows\system32>c:\path\to\batchfile\mybatch.bat
If that gives path errors, you have to fix them.
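One common cause of such path errors is a script that assumes its own folder is the current directory. A hedged fix for that case is to make the batch file switch to its own folder first by adding this near the top (%~dp0 expands to the directory containing the batch file, and /d lets cd change drives too):
cd /d "%~dp0"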