I've been trying to find ways to cut my Jenkins build time as much as possible, and thanks to this helpful SO post, I found pbzip2: Utilizing multi core for tar+gzip/bzip compression/decompression
Works great! A 6 min compression time was brought down to 2 mins on my machine with the following:
tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bz2 myapplication.app
But Jenkins just barfs when I put the above command in an Execute Shell task:
+ tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bz2 myapplication.app
a myapplication.appBuild step 'Execute shell' marked build as failure
The fact that the "Build step" line is getting mashed together with the output from tar tells me it might be a background-process issue that tar/pbzip2 is introducing.
I've tried adding a #!/bin/bash -xe shebang and get the same results. I've tried wrapping the tar command in an if statement. I've also tried putting tar in a background job itself with & and waiting for it. Same result.
Are there any techniques I could use to help the Jenkins process out?
Found out that even though I can run this command as the jenkins user from the command line, pbzip2 wasn't on the PATH for the Jenkins job. Pretty misleading, since there was no useful output.
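In case it helps anyone else, a small guard at the top of the Execute Shell step would have surfaced this immediately. A minimal sketch, assuming pbzip2 lives in /usr/local/bin on the build node (adjust the path to your install):

#!/bin/bash -xe
# Assumed install location for pbzip2; adjust for your node.
export PATH="$PATH:/usr/local/bin"
# Fail fast with a clear message if pbzip2 still can't be resolved.
command -v pbzip2 >/dev/null || { echo "pbzip2 not found on PATH" >&2; exit 1; }
tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bz2 myapplication.app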
Related
I am using Azure DevOps to build and deploy my git repo to a third-party VPS. I do this by logging into the server from Azure DevOps through SSH and executing a shell script that pulls the git repo and builds it with e.g. vue-cli and Laravel.
When the bash script is executed I receive a lot of errors on nearly all commands, although everything is succeeding. Can anyone tell me how to get rid of these unless something is really failing? (It would be nice to fail if npm build exits with code 1, for instance.)
See screenshot below.
Screenshots are only really helpful for visual issues. You can use PasteBin or similar to share long logs if necessary.
According to this issue Azure just follows the lead of whatever shell it's running code in. So, in Bash it continues unless explicitly told to stop.
To easily change this behavior you can add set -e (or set -o errexit) at the start of your script. The errexit option causes Bash to exit as soon as a command/etc returns a non-zero exit code.
Another worthy addition is the set -o pipefail option. If you've got any pipes like command1 | command2, this makes the pipeline return the rightmost non-zero exit code instead of only the last command's status. So, if command1 fails above but command2 succeeds, the pipeline returns command1's failure code instead of overwriting it.
Finally, set -u (or -o nounset) causes an error whenever an unset variable is encountered during parameter expansion. In a non-interactive shell this also exits the script.
Many scripts combine these by running set -euo pipefail at the beginning to stop them from running after the first problem is encountered.
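For illustration, a toy script where each of the failing lines below would abort the run on its own (with errexit active, execution never gets past the first one; UNSET_VAR is a deliberately undefined placeholder):

#!/bin/bash
set -euo pipefail
false                             # errexit: any non-zero status aborts the script
grep pattern missing.txt | sort   # pipefail: grep's failure is not masked by sort succeeding
echo "$UNSET_VAR"                 # nounset: expanding an undefined variable is fatal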
If you want to explicitly force a bash script to exit you can use || and && accordingly. The expression command || exit exits if the command fails, and command && exit exits if the command succeeds.
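For instance, assuming the build is started with npm run build, this fails the step as soon as the build fails:

npm run build || exit 1   # stop the script (and fail the pipeline step) if the build fails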
This seems to be a bug introduced in npm v3.10.8. You can check this discussion.
As a workaround you can add this snippet to package.json and run the command with the --no-optional switch:
"optionalDependencies": {
"fsevents": "*"
},
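Then run the install with optional dependencies skipped (assuming the failing command is npm install):

npm install --no-optional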
Also, there's a possibility that your npm version is too old. You can use the Node.js tool installer task with version spec = 12.x to install newer Node.js and npm versions.
I am trying to set up Travis CI to build a LaTeX report. When building the LaTeX report some steps have to be repeated, so the first time they are called there is a non-zero return code.
My travis.yml so far is
language: R
before_install:
- tlmgr install index
script:
- latex report
- bibtex report
- latex report
- latex report
- dvipdf report.dvi report.pdf
However, the Travis docs state:
If script returns a non-zero exit code, the build is failed, but continues to run before being marked as failed.
So if my first latex report command has a non-zero return code it will fail the build.
I would only like the build to fail if the last latex report or dvipdf report failed.
Does anyone have any ideas that could help?
Thanks in advance.
Just append || true to your command.
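In your case that means silencing only the passes that are expected to fail, so the build still breaks if the final latex run or dvipdf fails:

script:
- latex report || true
- bibtex report || true
- latex report || true
- latex report
- dvipdf report.dvi report.pdf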
(complex) Example:
- (docker run --rm -v $(pwd)/example:/workdir stocker-alert || true) 2>&1 | tee >(cat) | grep 'Price change within 1 day'
The docker command returns a non-zero exit code (because it's a negative test), but we want to continue anyway.
2>&1 - stderr is redirected to stdout (to be picked up by grep later)
tee - the output is printed (for debugging) and forwarded to grep
Finally grep asserts that the output contains a required string. If not, grep returns a non-zero exit code, failing the build.
If we wanted to ignore grep's result we would need another || true after grep.
Taken from schnatterer/stock-alert.
Not directly related to your original question, but I had much the same problem.
I found a solution using latexmk. This runs latex and bibtex as many times as needed.
If you look at my Travis configuration file:
https://github.com/73VW/TechnicalReport/blob/master/.travis.yml
you will see that you simply have to add it to the apt dependencies.
Then you can run it like this: latexmk -pdf -xelatex [Your_latex_file]
I'd like to understand why, when I execute the following command in my terminal, it works, but when I run it through a script it doesn't.
The command when I run it in my terminal:
./tparente & ps --no-headers -C tparente -o rss,vsz >> "mem_results"
The mem_result file has the rss and vsz written in it.
The command when I run it through my script is slightly modified; it is written like this:
sh ~/Documents/tparente & ps --no-headers -C tparente -o rss,vsz >> "mem_results"
There's an echo command that writes some text to mem_results before the aforementioned command; that works.
And if I remove the --no-headers flag, it writes the header to the file but not the result.
I know the script is run, because it produces a file at the end.
This has been bugging me for a couple of hours now.
Thank you
Alex.
I think I may have found the answer.
After trying a couple of configurations of the command line, this one works:
./tparente & ps --no-headers -C tparente -o rss,vsz >> "mem_results"
The difference is subtle (there's no "sh")
This line is from the script. What I noticed is that when I ran the script on its own and ran a ps command in another terminal, the tparente process wasn't there. I don't know why, but my instinct told me to remove the sh, and I did, and it works.
If anyone has a proper explanation go ahead :)
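A likely explanation (my assumption, not verified against the original setup): ps -C matches a process's command name, and when the program is launched through sh, the process ps sees is named sh, not tparente, so the -C filter matches nothing:

sh ~/Documents/tparente &   # command name is "sh"; ps -C tparente finds nothing
./tparente &                # command name is "tparente"; ps -C tparente matches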
I am using a shell script in Jenkins that, at a certain point, uploads a file to a server using curl. I would like to see whatever output curl produces but also check whether it is the output I expect. If it isn't, then I want the shell to exit with a non-zero code so that Jenkins knows the script failed.
I first tried using curl -f, but this causes the pipe to be cut as soon as the upload fails and the error output never gets to the client. Then I tried something like this:
curl ...params... | tee /dev/tty | \
xargs -I{} test "Expected output string" = '{}'
This works from a normal SSH shell but in the Jenkins console output I see:
tee: /dev/tty: No such device or address
I'm not sure why this is since I thought Jenkins was communicating with the slave using a normal SSH shell. In any case, the whole xargs + test thing strikes me as a bit of a hack.
Is there a way to accomplish this in Jenkins so that I can see the output and also test whether it matches a specific string?
When Jenkins communicates with the slave via SSH, there is no terminal allocated, and so there is no /dev/tty device for that process.
Maybe you can send it to /dev/stderr instead? It will be a terminal in an interactive session and some log file in a non-interactive session.
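A minimal sketch of that idea, with a hypothetical upload URL standing in for the real curl parameters:

# tee to /dev/stderr so the response shows up in the Jenkins console while being captured
output=$(curl -s -F "file=@build.tar.gz" https://example.com/upload | tee /dev/stderr)
# Exit non-zero if the response isn't what we expect, so Jenkins marks the build failed
[ "$output" = "Expected output string" ] || exit 1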
Have you thought about using the Publish over SSH Plugin instead of using curl? Might save you some headache.
If you just copy the file from master to slave, there is also a plugin for that, the Copy To Slave plugin.
Cannot write any comments yet, so I had to post it as an answer.
I'd like to know whether there's a way to 'query' the state of a Maven execution from within a shell script (used for the build process).
The point is that I would like the whole build script to fail as soon as a single error appears in any of the shell script's Maven executions.
e.g.
(0) mvn -f .../someDir clean
(1) mvn -f .../1/pom.xml install
(2) mvn -f .../2/pom.xml -PgenerateWadl
So if an error occurs within (0), then (1) and (2) must not be executed; instead the build script should quit with an error message directly after (0).
I'm not THAT much into shell scripting, but I know of the $? variable to get the return value of an earlier execution. But as Maven simply seems to write errors out to the console, I'm not sure this will work.
I would have liked to research more information concerning $?, but it's quite hard to google for.
A simple way is to use the -e option.
set -e
mvn -f .../somedir clean
mvn -f .../otherdir install
mvn -f .../thirddir -PgenerateWadl package
This will automatically have the shell exit with an error if any command in the sequence exits with a non-zero status (unless it is part of an explicit test such as an if or while, or a chain like a || b).
You can watch this happen by inserting a call to false between any of the Maven calls.
See the set POSIX spec for details on set -e.
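As for $? from the question: Maven does exit with a non-zero status when a build fails, so an explicit check works too. A minimal sketch:

mvn -f .../somedir clean
if [ $? -ne 0 ]; then
    echo "Maven clean failed" >&2
    exit 1
fi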
mvn ... && mvn ... && mvn ...
The execution will only proceed if the previous one was successful.
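Applied to the three commands from the question:

mvn -f .../someDir clean && mvn -f .../1/pom.xml install && mvn -f .../2/pom.xml -PgenerateWadl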