Use of a bash env variable in the post section of a pipeline

I have read as many posts as possible on this topic, but none of them suggests a working solution for me, so I'm throwing it to the community again:
In a Jenkinsfile pipeline I have:
steps {
    (...)
    sh script: '''
        $pkgname #existing var
        export report_filename=$pkgname'_report.txt'
        # (stuff is being written to the $report_filename file...)
    '''
}
post {
    always {
        script {
            // want to read the file with the name carried by $report_filename
            def report = readFile(file: env.report_filename, encoding: 'utf-8').trim()
            buildDescription(report)
        }
    }
}
I can't manage to pass the value of the report_filename bash variable on to the post > always > script section. I've tried ${env.report_filename} (with and without single or double quotes), with and without the env. prefix, and some other crazy things.
What am I doing wrong here?
Thanks.

Maybe this is a little bit off, but:
Create a variable with def, use the option returnStdout: true, and parse the output:
    def var = sh(script: 'echo $pkgname', returnStdout: true).split("\n")
Then use var[0] in the stage: readFile(file: var[0], ...)
If you can use env, add:
    environment {
        VAR = sh(script: 'echo $pkgname', returnStdout: true).split("\n")[0]
    }
    script {
        // read the file with the name carried by $report_filename
        def report = readFile(file: env.VAR, encoding: 'utf-8').trim()
        buildDescription(report)
    }
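Pulling that together, a minimal sketch of a full declarative pipeline using this approach (hedged: the echo command and mypkg value are placeholders, and this assumes the filename can be computed at environment-resolution time rather than inside an earlier sh step):
    pipeline {
        agent any
        environment {
            // Runs once, before the stages; the command's stdout becomes the value.
            REPORT_FILENAME = sh(script: 'echo mypkg_report.txt', returnStdout: true).trim()
        }
        stages {
            stage('build') {
                steps {
                    sh 'echo "report body goes here" > "$REPORT_FILENAME"'
                }
            }
        }
        post {
            always {
                script {
                    def report = readFile(file: env.REPORT_FILENAME, encoding: 'utf-8').trim()
                    buildDescription(report)
                }
            }
        }
    }
The key limitation remains: a variable exported inside one sh step lives only in that shell, so the value has to be computed where Groovy can see it (environment or script) or persisted to the workspace.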

I don't see why you don't simply declare the variables in Groovy right at the start.
I'm not too familiar with the language and don't currently have a way to test this, but something like this:
def pkgname = "gunk"
def report_filename = "${pkgname}_report.txt"
steps {
    (...)
    sh script: """
        # use triple double quotes so that Groovy variables are interpolated
        # the bare $pkgname line was a syntax error, so it is taken out
        (stuff is being written to the $report_filename file...)
    """
}
post {
    always {
        script {
            // report_filename is a Groovy variable here, so read it directly
            // rather than going through env.
            def report = readFile(file: report_filename, encoding: 'utf-8').trim()
            buildDescription(report)
        }
    }
}
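A third option, found in neither answer above (a sketch; .report_filename is my arbitrary choice of scratch file): since an exported shell variable dies with its shell, persist the name into the workspace and read it back in post:
    steps {
        sh '''
            pkgname=mypkg                             # placeholder for the existing var
            report_filename=${pkgname}_report.txt
            echo "report body goes here" > "$report_filename"
            # persist the name so later pipeline sections can recover it
            echo "$report_filename" > .report_filename
        '''
    }
    post {
        always {
            script {
                def name = readFile('.report_filename').trim()
                def report = readFile(file: name, encoding: 'utf-8').trim()
                buildDescription(report)
            }
        }
    }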

Related

How to download a list of `FastQ` files in `Nextflow` using the `fromSRA` function?

I have a tsv file with various columns. One of the columns of interest for me is the run_accession column. It contains accession IDs of various genome data samples. I want to write a pipeline in Nextflow which reads the accession IDs from this file using the following command:
cut -f4 datalist.tsv | sed -n 2,11p
Output:
ERR2512385
ERR2512386
ERR2512387
ERR2512388
ERR2512389
ERR2512390
ERR2512391
ERR2512392
ERR2512393
ERR2512394
and feed this list of IDs into the Channel.fromSRA method. So far, I have tried this:
#!/home/someuser/bin nextflow
nextflow.enable.dsl=2

params.datalist = "$baseDir/datalist.tsv"

process fetchRunAccession {
    input:
    path dlist

    output:
    file accessions

    """
    cut -f4 $dlist | sed -n 2,11p
    """
}

process displayResult {
    input:
    file accessions

    output:
    stdout

    """
    echo "$accessions"
    """
}

workflow {
    accessions_p = fetchRunAccession(params.datalist)
    result = displayResult(accessions_p)
    result.view { it }
}
And I get this error:
Error executing process > 'fetchRunAccession'

Caused by:
  Missing output file(s) `accessions` expected by process `fetchRunAccession`
If I run just the first process, it works well and prints 10 lines as expected. The second process is just a placeholder for the actual fromSRA implementation, but I have not been able to use the output of the first process as the input of the second. I am very new to Nextflow and my code probably has some silly mistakes. I would appreciate any help in this matter.
The fromSRA function is actually a factory method. It requires either a project/study id, or one or more accession numbers, which must be specified as a list. A channel emitting accession numbers (like in your example code) will not work here. Also, it would be better to avoid spawning a separate job/process just to parse a small CSV file. Instead, just let your main Nextflow process do this. There are lots of ways to do this, but for CSV input I find using Nextflow's CsvSplitter class makes this easy:
import nextflow.splitter.CsvSplitter

nextflow.enable.dsl=2

def fetchRunAccessions( tsv ) {
    def splitter = new CsvSplitter().options( header:true, sep:'\t' )
    def reader = new BufferedReader( new FileReader( tsv ) )

    splitter.parseHeader( reader )

    List<String> run_accessions = []
    Map<String,String> row

    while( row = splitter.fetchRecord( reader ) ) {
        run_accessions.add( row['run_accession'] )
    }
    return run_accessions
}

workflow {
    accessions = fetchRunAccessions( params.filereport )

    Channel
        .fromSRA( accessions )
        .view()
}
Note that Nextflow's ENA download URL was updated recently. You'll need the latest version of Nextflow (21.07.0-edge) to get this to run easily:
NXF_VER=21.07.0-edge nextflow run test.nf --filereport filereport.tsv
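For completeness: the "Missing output file(s)" error itself happens because the fetchRunAccession script never creates a file named accessions. If you did want to keep a separate process, a sketch of one fix (accessions.txt is my own name choice, not from the question) is to redirect the output and declare that file:
    process fetchRunAccession {
        input:
        path dlist

        output:
        path 'accessions.txt'

        """
        cut -f4 $dlist | sed -n 2,11p > accessions.txt
        """
    }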

Jenkins DSL: job with different job histories

I have a parametrized job Dummy that works as expected. Then I have multiple jobs that call job Dummy with specific sets of parameters (job Test, for example).
Let's say job Dummy script is as follows:
def jobLabel = "dummy-" + env.JOB_BASE_NAME.replace('/', '-').replace(' ', '_') + "${PARAM}"
currentBuild.displayName = "Dummy ${PARAM}"

echo "Previous result: " + currentBuild.previousBuild.result

if (currentBuild.previousBuild.result.equals("SUCCESS")) {
    error("Build failed because of this and that..")
} else {
    echo "Dummy ${PARAM}!"
}
And job Test script as follows:
// in this map we'll place the jobs that we wish to run
def branches = [:]

def environments = [
    'US',
    'EU',
    'AU'
]

environments.each { env ->
    branches["Dummy Tests " + env] = {
        def result = build job: 'Dummy', parameters: [
            string(name: 'PARAM', value: env)
        ]
        echo "${result.getResult()}"
        if (result.getResult().equals("SUCCESS")) {
            echo "Success! " + env
        } else if (result.getResult().equals("FAILURE")) {
            echo "Failure! " + env
        }
    }
}

parallel branches
The previous job result is whatever ran last the previous time. I would like to somehow have the job history keyed by the parameter, so I can detect when a specific job/parameter combination switches from failure to success (and vice versa) for notification purposes. I guess you could iterate the history for that, but that sounds too complicated for something that is hopefully a common requirement. Any hints or ideas?
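No answer is recorded here, but as a sketch of the "iterate the history" idea (assuming a pipeline context where currentBuild is a RunWrapper, and that the PARAM build variable is what distinguishes runs):
    // Walk back through earlier runs of this job and return the result of the
    // most recent one that ran with the same PARAM value, or null if none did.
    def lastResultForParam(String param) {
        def run = currentBuild.previousBuild
        while (run != null) {
            def vars = run.buildVariables          // the run's build variables as a map
            if (vars != null && vars['PARAM'] == param) {
                return run.result                  // e.g. 'SUCCESS' or 'FAILURE'
            }
            run = run.previousBuild
        }
        return null
    }

    def previous = lastResultForParam(env.PARAM)
    if (previous != null && previous != currentBuild.currentResult) {
        echo "PARAM=${env.PARAM} flipped from ${previous} to ${currentBuild.currentResult}"
        // send the notification here
    }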

Trying to write a custom rule

I'm trying to write a custom rule for gqlgen. The idea is to run it to generate Go code from a GraphQL schema.
My intended usage is:
gqlgen(
    name = "gql-gen-foo",
    schemas = ["schemas/schema.graphql"],
    visibility = ["//visibility:public"],
)
"name" is the name of the rule, on which I want other rules to depend; "schemas" is the set of input files.
So far I have:
load(
    "@io_bazel_rules_go//go:def.bzl",
    _go_context = "go_context",
    _go_rule = "go_rule",
)

def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    args = ["run github.com/99designs/gqlgen --config"] + [ctx.attr.config]
    ctx.actions.run(
        inputs = ctx.attr.schemas,
        outputs = [ctx.actions.declare_file(ctx.attr.name)],
        arguments = args,
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
        executable = go.go,
    )

_gqlgen = _go_rule(
    implementation = _gqlgen_impl,
    attrs = {
        "config": attr.string(
            default = "gqlgen.yml",
            doc = "The gqlgen filename",
        ),
        "schemas": attr.label_list(
            allow_files = [".graphql"],
            doc = "The schema file location",
        ),
    },
    executable = True,
)
def gqlgen(**kwargs):
tags = kwargs.get("tags", [])
if "manual" not in tags:
tags.append("manual")
kwargs["tags"] = tags
_gqlgen(**kwargs)
My immediate issue is that Bazel complains that the schemas are not Files:
expected type 'File' for 'inputs' element but got type 'Target' instead
What's the right approach to specify the input files?
Is this the right approach to generate a rule that executes a command?
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
Instead of:
    ctx.actions.run(
        inputs = ctx.attr.schemas,
Use:
    ctx.actions.run(
        inputs = ctx.files.schemas,
Is this the right approach to generate a rule that executes a command?
This looks right, as long as gqlgen creates the file with the correct output name (outputs = [ctx.actions.declare_file(ctx.attr.name)]).
generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")

# ..
ctx.actions.run(
    outputs = [generated_go_file],
    # note: ctx.actions.run takes 'arguments', and action command lines should
    # use .path (short_path is relative to runfiles, not the execution root)
    arguments = ["run", "...", "--output", generated_go_file.path],
    # ..
)
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
The output file needs to be created, and as long as it's returned at the end of the rule implementation in a DefaultInfo provider, other rules will be able to depend on the file label (e.g. //my/package:foo-gqlgen.go).
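A minimal sketch of that return, appended to the implementation above (DefaultInfo and depset are the standard Bazel APIs; the rest is unchanged from the answer's snippet):
    def _gqlgen_impl(ctx):
        generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")
        # ... ctx.actions.run(...) as above, writing generated_go_file ...
        # Returning the file in DefaultInfo is what lets other rules depend on
        # this target's label and receive the generated file.
        return [DefaultInfo(files = depset([generated_go_file]))]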

how to get detailed information about execution of a script in groovy

I found here a very good example of what I want:
Basically, I want to be able to execute a String as a Groovy script with an expression, but if the condition is false, I want to show detailed information about why it was evaluated as false.
EDIT
I want a utility method that works like this:
    def expression = "model.book.title == \"The Shining\""
    def output = magicMethod(expression)
    // output.result: the exact result of executing expression
    // output.detail: a string telling me why this expression returns true or false, similar to the image
I think it may be a combination of Eval.me + assert, catching the exception in order to get the details.
Yeah, it works with assert. Thanks for the idea, @Justin Piper!
here is the snippet:
def model = [model:[book:[title:"The Shinning"]]]

def magicMethod = { String exp ->
    def out = [:]
    out.result = Eval.x(model, "x.with{${exp}}")
    try {
        if (out.result) {
            // force a failing assert on the negation to capture the detail
            Eval.x(model, "x.with{assert !(${exp})}")
        } else {
            Eval.x(model, "x.with{assert ${exp}}")
        }
    } catch (Throwable e) {
        out.detail = e.getMessage()
    }
    return out
}
def expression = "model.book.title == \"The Shining\""
def output = magicMethod(expression)
println "result: ${output.result}"
println "detail: ${output.detail}"

problem in imagemagick and grails

I have a new problem with ImageMagick that looks strange.
I'm using Mac OS X Snow Leopard and I've installed ImageMagick on it, and it works fine from the command line.
But when I call it from a Grails class like the following snippet, it gives me
"Cannot run program "convert": error=2, No such file or directory"
The code is:
public static boolean resizeImage(String srcPath, String destPath, String size) {
    ArrayList<String> command = new ArrayList<String>(10);
    command.add("convert");
    command.add("-geometry");
    command.add(size);
    command.add("-quality");
    command.add("100");
    command.add(srcPath);
    command.add(destPath);
    System.out.println(command);
    return exec((String[]) command.toArray(new String[1]));
}

private static boolean exec(String[] command) {
    Process proc;
    try {
        //System.out.println("Trying to execute command " + Arrays.asList(command));
        proc = Runtime.getRuntime().exec(command);
    } catch (IOException e) {
        System.out.println("IOException while trying to execute ");
        for (int i = 0; i < command.length; i++) {
            System.out.println(command[i]);
        }
        return false;
    }
    //System.out.println("Got process object, waiting to return.");
    int exitStatus;
    while (true) {
        try {
            exitStatus = proc.waitFor();
            break;
        } catch (java.lang.InterruptedException e) {
            System.out.println("Interrupted: Ignoring and waiting");
        }
    }
    if (exitStatus != 0) {
        System.out.println("Error executing command: " + exitStatus);
    }
    return (exitStatus == 0);
}
I've tried a normal command like ls and it works, so the problem is that Grails can't find the convert command itself. Is it an OS problem or something?
(see below for the answer)
I have run into the same problem. The issue appears to be something with Mac OS X specifically, as we have several Linux instances running without error. The error looks similar to the following:
java.io.IOException: Cannot run program "/usr/bin/ImageMagick-6.7.3/bin/convert /a/temp/in/tmpPic3143119797006817740.png /a/temp/out/100000726.png": error=2, No such file or directory
All the files are there, in chmod 777 directories, and as you pointed out, running the exact command from the shell works fine.
My theory at this point is that ImageMagick cannot load some library of its own, and the "no such file" refers to a dylib or something along those lines.
I have tried setting LD_LIBRARY_PATH and a few others to no avail.
I finally got this working. Here is how I have it set up; I hope this helps.
The crux of the fix, for me, was that I wrapped 'convert' in a shell script, set a bunch of environment variables, and then called that shell script instead of convert directly:
(convertWrapper.sh)
export MAGICK_HOME=/usr/local/ImageMagick-6.7.5
export MAGICK_CONFIGURE_PATH=${MAGICK_HOME}/etc/ImageMagick:${MAGICK_HOME}/share/doc/ImageMagick/www/source
export PATH=${PATH}:${MAGICK_HOME}/bin
export LD_LIBRARY_PATH=${MAGICK_HOME}/lib:${LD_LIBRARY_PATH}
export DYLD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}:${MAGICK_HOME}/lib
export MAGICK_TMPDIR=/private/tmp

echo "$@" >> /private/tmp/m.log 2>&1
/usr/local/ImageMagick-6.7.5/bin/convert -verbose "$@" >> /private/tmp/m.log 2>&1
Additionally, the convert call was doing some rather complicated stuff, so I added the parameter '-respect-parenthesis' (which may or may not have had an effect).
I am not sure how much of the environment variable setting is needed as I was stabbing in the dark for a while, but since this is only for my development box...
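An alternative to the wrapper script (a sketch, not what the author did; the paths and arguments are examples) is to set the environment on the Java side with ProcessBuilder:
    // Launch convert with an explicit environment instead of a wrapper script.
    def pb = new ProcessBuilder(
        '/usr/local/ImageMagick-6.7.5/bin/convert',
        '-geometry', '100x100', 'in.png', 'out.png')
    pb.environment().put('MAGICK_HOME', '/usr/local/ImageMagick-6.7.5')
    pb.environment().put('DYLD_LIBRARY_PATH', '/usr/local/ImageMagick-6.7.5/lib')
    pb.redirectErrorStream(true)      // merge stderr into stdout for logging
    def proc = pb.start()
    proc.waitFor()
    println proc.inputStream.text     // Groovy sugar for reading the output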
You need to work out what your PATH is set to when you run a command from Java. It must be different to the one you have when running from the terminal.
Are you running Grails (via Tomcat?) as a different user? It might have a different path to your normal user.
You might want to try one of the image plugins that are part of the Grails ecosystem:
http://www.grails.org/ImageTools+plugin
The PATH that Grails sees when the app is running in the server is probably different from the one you get when running java from the command line.
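A quick way to test that theory from inside the running app (a sketch; it prints the PATH the JVM actually inherited and where convert resolves, if anywhere):
    // Show the PATH as seen by the Grails/JVM process.
    println "JVM PATH: ${System.getenv('PATH')}"

    // Ask a shell whether convert resolves in that environment.
    def proc = ['sh', '-c', 'command -v convert || echo convert not found'].execute()
    proc.waitFor()
    println proc.text.trim()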
I did it like this:
Put the convert binary in /usr/bin, then add to Config.groovy:
    gk {
        imageMagickPath = "/usr/bin/convert"
    }
Then in my ImageService.groovy:
    import org.springframework.web.context.request.RequestContextHolder as RCH
    [..]
    def grailsApplication = RCH.requestAttributes.servletContext.grailsApplication
    def imPath = grailsApplication.config.gk.imageMagickPath
    def command = imPath + " some_properties"
    def proc = Runtime.getRuntime().exec(command)
This way you get a command like /usr/bin/convert some_properties, and it works. Just don't forget to put the convert binary in that location and invoke it via its full path.
