I'm currently making a start on using Nextflow to develop a bioinformatics pipeline. Below, I've created a params.files variable which contains my FASTQ files, and fed it into the fasta_files channel.
The process trimming and its script take this channel as input; ideally, I would then output all of the "$sample"_trimmed.fq.gz files into the output channel, trimmed_channel. However, when I run this script, I get the following error:
Missing output file(s) `trimmed_files` expected by process `trimming` (1)
The Nextflow script I'm trying to run is:
#! /usr/bin/env nextflow
params.files = files("$baseDir/FASTQ/*.fastq.gz")
println "fastq files for trimming:$params.files"
fasta_files = Channel.fromPath(params.files)
println "files in the fasta channel: $fasta_files"
process trimming {
input:
file fasta_file from fasta_files
output:
path trimmed_files into trimmed_channel
// the shell script to be run:
"""
#!/usr/bin/env bash
mkdir trimming_report
cd /home/usr/Nextflow
#Finding and renaming my FASTQ files
for file in FASTQ/*.fastq.gz; do
[ -f "\$file" ] || continue
name=\$(echo "\$file" | awk -F'[/]' '{ print \$2 }') #renaming fastq files.
sample=\$(echo "\$name" | awk -F'[.]' '{ print \$1 }') #renaming fastq files.
echo "Found" "\$name" "from:" "\$sample"
if [ ! -e FASTQ/"\$sample"_trimmed.fq.gz ]; then
trim_galore -j 8 "\$file" -o FASTQ #trim the files
mv "\$file"_trimming_report.txt trimming_report #moves to the directory trimming report
else
echo ""\$sample".trimmed.fq.gz exists skipping trim galore"
fi
done
trimmed_files="FASTQ/*_trimmed.fq.gz"
echo \$trimmed_files
"""
}
The script in the process works fine. However, I'm wondering if I'm misunderstanding or missing something obvious. If I've forgotten to include something, please let me know; any help is appreciated!
Nextflow does not export the variable trimmed_files to its own scope unless you tell it to do so using the env output qualifier; however, doing it that way would not be very idiomatic.
Since you know the pattern of your output files ("FASTQ/*_trimmed.fq.gz"), simply pass that pattern as output:
path "FASTQ/*_trimmed.fq.gz" into trimmed_channel
Some things you do, but probably want to avoid:
Changing directory inside your NF process: don't do this, as it entirely breaks the concept of Nextflow's /work folder setup.
Writing a bash loop inside a NF process: if you set up your channels correctly, there should only be one task per spawned process (see the sketch below).
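Putting those two points together with the pattern output above, the asker's process could be reduced to something like this sketch (one file per task; trim_galore writes into the task's work directory, so no cd or loop is needed; the -j 8 flag is carried over from the question):
process trimming {
    input:
    file fasta_file from fasta_files
    output:
    path "*_trimmed.fq.gz" into trimmed_channel
    """
    trim_galore -j 8 "${fasta_file}"
    """
}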
Pallie has already provided some sound advice and, of course, the right answer, which is: environment variables must be declared using the env qualifier.
However, given your script definition, I think there might be some misunderstanding about how best to skip the execution of previously generated results. The cache directive is enabled by default, and when the pipeline is launched with the -resume option, additional attempts to execute a process using the same set of inputs will cause the process execution to be skipped and will produce the stored data as the actual results.
This example uses Nextflow DSL 2 for my convenience, but DSL 2 is not strictly required:
nextflow.enable.dsl=2
params.fastq_files = "${baseDir}/FASTQ/*.fastq.gz"
params.publish_dir = "./results"
process trim_galore {
tag { "${sample}:${fastq_file}" }
publishDir "${params.publish_dir}/TrimGalore", saveAs: { fn ->
fn.endsWith('.txt') ? "trimming_reports/${fn}" : fn
}
cpus 8
input:
tuple val(sample), path(fastq_file)
output:
tuple val(sample), path('*_trimmed.fq.gz'), emit: trimmed_fastq_files
path "${fastq_file}_trimming_report.txt", emit: trimming_report
"""
trim_galore \\
-j ${task.cpus} \\
"${fastq_file}"
"""
}
workflow {
Channel.fromPath( params.fastq_files )
| map { tuple( it.getSimpleName(), it ) }
| set { sample_fastq_files }
results = trim_galore( sample_fastq_files )
results.trimmed_fastq_files.view()
}
Run using:
nextflow run script.nf \
-ansi-log false \
--fastq_files '/home/usr/Nextflow/FASTQ/*.fastq.gz'
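Then, when re-running the pipeline, append -resume so that tasks whose inputs are unchanged are restored from the cache rather than re-executed:
nextflow run script.nf \
    -resume \
    -ansi-log false \
    --fastq_files '/home/usr/Nextflow/FASTQ/*.fastq.gz'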
This might be a very basic question for you guys; however, I have just started with Nextflow and I am struggling with the simplest example.
I will first explain what I have done and then the problem.
Aim: I aim to make a workflow for my bioinformatics analyses as the one here (https://www.nextflow.io/example4.html)
Background: I have installed all the packages that were needed and they all work from the console without any error.
My run: I have used the same script as in the example, only replacing the directory names. Here is how I have arranged the directories:
Location of script:
~/raman/nflow/script.nf
Location of FASTQ files:
~/raman/nflow/Data/T4_1.fq.gz
~/raman/nflow/Data/T4_2.fq.gz
Location of transcriptome file:
~/raman/nflow/Genome/trans.fa
The script
#!/usr/bin/env nextflow
/*
* The following pipeline parameters specify the refence genomes
* and read pairs and can be provided as command line options
*/
params.reads = "$baseDir/Data/T4_{1,2}.fq.gz"
params.transcriptome = "$baseDir/HumanGenome/SalmonIndex/gencode.v42.transcripts.fa"
params.outdir = "results"
workflow {
read_pairs_ch = channel.fromFilePairs( params.reads, checkIfExists: true )
INDEX(params.transcriptome)
FASTQC(read_pairs_ch)
QUANT(INDEX.out, read_pairs_ch)
}
process INDEX {
tag "$transcriptome.simpleName"
input:
path transcriptome
output:
path 'index'
script:
"""
salmon index --threads $task.cpus -t $transcriptome -i index
"""
}
process FASTQC {
tag "FASTQC on $sample_id"
publishDir params.outdir
input:
tuple val(sample_id), path(reads)
output:
path "fastqc_${sample_id}_logs"
script:
"""
fastqc "$sample_id" "$reads"
"""
}
process QUANT {
tag "$pair_id"
publishDir params.outdir
input:
path index
tuple val(pair_id), path(reads)
output:
path pair_id
script:
"""
salmon quant --threads $task.cpus --libType=U -i $index -1 ${reads[0]} -2 ${reads[1]} -o $pair_id
"""
}
Output:
(base) ntr#ser:~/raman/nflow$ nextflow script.nf
N E X T F L O W ~ version 22.10.1
Launching `script.nf` [modest_meninsky] DSL2 - revision: 032a643b56
executor > local (2)
executor > local (2)
[- ] process > INDEX (gencode) -
[28/02cde5] process > FASTQC (FASTQC on T4) [100%] 1 of 1, failed: 1 ✘
[- ] process > QUANT -
Error executing process > 'FASTQC (FASTQC on T4)'
Caused by:
Missing output file(s) `fastqc_T4_logs` expected by process `FASTQC (FASTQC on T4)`
Command executed:
fastqc "T4" "T4_1.fq.gz T4_2.fq.gz"
Command exit status:
0
Command output:
(empty)
Command error:
Skipping 'T4' which didn't exist, or couldn't be read
Skipping 'T4_1.fq.gz T4_2.fq.gz' which didn't exist, or couldn't be read
Work dir:
/home/ruby/raman/nflow/work/28/02cde5184f4accf9a05bc2ded29c50
Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`
I believe I have an issue with my understanding of baseDir. I am assuming that baseDir is the directory where I have my script.nf file. I am not sure what is going wrong or how I can fix it.
Could anyone please help or guide me?
Thank you
Caused by:
Missing output file(s) `fastqc_T4_logs` expected by process `FASTQC (FASTQC on T4)`
Nextflow complains when it can't find the declared output files. This can occur even if the command completes successfully, i.e. with exit status 0. The problem here is that fastqc simply skips files that don't exist or can't be read (e.g. permissions problems), but it does produce these warnings:
Skipping 'T4' which didn't exist, or couldn't be read
Skipping 'T4_1.fq.gz T4_2.fq.gz' which didn't exist, or couldn't be read
The solution is to just make sure all files exist. Note that the fromFilePairs factory method produces a list of files in the second element. Therefore quoting a space-separated pair of filenames is also problematic. All you need is:
script:
"""
fastqc ${reads}
"""
I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
script: 'git --no-pager show -s --format=\'%ae\'',
returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
GIT_COMMIT_EMAIL = sh (
script: 'git --no-pager show -s --format=\'%ae\'',
returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
The current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get the output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know, there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but sh/bat steps can now return the std output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
scripted pipeline
try {
// Fails with non-zero exit if dir1 does not exist
def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
Unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use case. A 2017 ticket (JENKINS-44930) is stuck in a state of opinionated ping-pong while making no progress towards a solution; please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT 2>&1
- but it's then up to you to parse that out of the main output, and you won't get the output if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
def errmsg = readFile(stderrfile)
println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
// output is directory listing from stdout
} else {
// output is error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case, which I believe will make sense!
node('master'){
stage('stage1'){
def commit = sh (returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
echo "${commit[-1]} "
}
}
For those who need to use the output in subsequent shell commands rather than in Groovy, something like this example could be done:
stage('Show Files') {
environment {
MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
}
steps {
sh '''
echo "$MY_FILES"
'''
}
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code, you need to export it first.
script{
sh " 'shell command here' > command"
command_var = readFile('command').trim()
sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell command's return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here, the delimiter is a comma.
Ex: releaseModules.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter: configurable-wf-report), the build number (3rd parameter: 94), and the commit id (4th parameter: 23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
List det1 = it.split(',')
buildid=det1[1].trim()
Modname = det1[0].trim()
tag= det1[2].trim()
echo Modname
echo buildid
echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work then.
I had a similar issue, where I applied something that is not a clean way of doing this, but eventually it worked and served the purpose.
Solution:
In the shell block, echo the value and write it into some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example:
steps {
script {
sh '''
echo $PATH > path.txt
# using '>' because I want to create a new file every time, to get the newest value of PATH
'''
path = readFile(file: 'path.txt')
path = path.trim() //local groovy variable assignment
//One can assign these values to env and params as below -
env.PATH = path //if you want to assign it to env var
params.PATH = path //if you want to assign it to params var
}
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
output:
2
Note that this is not a simple single quote but a backquote/backtick ( ` ).
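Note that the $( ) form of command substitution does the same job, is easier to read, and nests cleanly:
my_var=$(echo 2)
echo "$my_var"   # prints 2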
I am working on a large shell program and need a way to import functions from other scripts as required, without polluting the global scope with all the internal functions from those scripts.
UPDATE: However, those imported functions have internal dependencies, so an imported function must be executed in the context of its own script.
I came up with this solution and wonder if there is any existing strategy out there and, if not, whether perhaps this is a really bad idea?
PLEASE TAKE A LOOK AT THE POSTED SOLUTION BEFORE RESPONDING
example usage of my solution:
main.sh
import user get_name
import user set_name
echo "hello $(get_name)"
echo "Enter a new user name :"
while true; do
read user_input < /dev/tty
[ -n "$user_input" ] && break
done
set_name "$user_input"
user.sh
import state
set_name () {
state save "user_name" "$1"
}
get_name () {
state get_value "user_name"
}
As one approach, you could put a comment in the script to indicate where you want to stop sourcing:
$ cat script
fn() { echo "You are running fn"; }
#STOP HERE
export var="Unwanted name space pollution"
And then, if you are using bash, source it like this:
source <(sed '/#STOP HERE/q' script)
<(...) is process substitution, and our process, sed '/#STOP HERE/q' script, just extracts the lines from script until the stop line is reached.
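For example, with the script above, only fn ends up defined in the current shell; the export after the marker never runs (a quick check, assuming bash):
$ source <(sed '/#STOP HERE/q' script)
$ fn
You are running fn
$ echo "${var-unset}"
unset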
Adding more precise control
We can select particular sections from a file if we add both start and stop flags:
$ cat script
export var1="Unwanted name space pollution"
#START
fn1() { echo "You are running fn1"; }
#STOP
export var2="More unwanted name space pollution"
#START
fn2() { echo "You are running fn2"; }
#STOP
export var3="More unwanted name space pollution"
And then source the file like this:
source <(sed -n '/#START/,/#STOP/p' script)
Create a standalone shell script that does this:
it will take 2 arguments, the file name and the function name
it will source the input file first
it will then use declare -f with the function name
In your code you can then include functions like this:
eval "$(./importfunctions.sh filename functionname)"
What is happening here:
Step 1 basically reads the file and sources it in a new shell environment, then echoes the function declaration.
Step 2 evals that function into our main code.
So the final result is as if we had written just that function in our main script.
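A minimal sketch of what importfunctions.sh itself could look like (the script name and layout are this answer's convention, not an established tool):
#!/usr/bin/env bash
# importfunctions.sh <file> <function>
# Step 1: source the file in this script's own shell; the caller's namespace
# is untouched because this runs as a separate process.
source "$1"
# Step 2 happens in the caller: we only print the function's declaration,
# which the caller then evals.
declare -f "$2"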
When the function bodies in the script are indented up to the closing } (so each closing } sits at the start of a line) and all functions start with the keyword function, you can include specific functions without changing the original files:
largeshell.sh
#!/bin/bash
function demo1 {
echo "d1"
}
function demo2 {
echo "d2"
}
function demo3 {
echo "d3"
}
function demo4 {
echo "d4"
}
echo "Main code of largeshell... "
demo2
Now, to source demo1() and demo3() while leaving demo4() out:
source <(sed -n '/^function demo1 /,/^}/p' largeshell.sh)
source <(sed -n '/^function demo3 /,/^}/p' largeshell.sh)
demo1
demo4
Or source all functions in a loop:
for f in demo1 demo3; do
echo sourcing $f
source <(sed -n '/^function '$f' /,/^}/p' largeshell.sh)
done
demo1
demo4
You can make it fancier by sourcing a special script that will:
grep for all strings starting with largeshell., like largeshell.demo1
generate functions like largeshell.demo1 that will call demo1
and source all functions that are called.
Your new script will look like this (a sketch of the includer follows below):
source function_includer.sh
largeshell.demo1
largeshell.demo4
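A rough sketch of what such a function_includer.sh might look like (the file name and the grep-able call convention are hypothetical; it assumes the function-keyword layout from above):
# function_includer.sh (hypothetical sketch)
# Scan the calling script ($0) for largeshell.<name> calls and, for each one,
# define a wrapper that lazily sources that single function and runs it.
for call in $(grep -o 'largeshell\.[A-Za-z0-9_]*' "$0" | sort -u); do
  name=${call#largeshell.}
  eval "$call() { source <(sed -n '/^function $name /,/^}/p' largeshell.sh); $name \"\$@\"; }"
done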
EDIT:
You might want to reconsider your requirements.
The above solution is not only slow, but it will also make it hard for the guys and ladies who made largeshell.sh. As soon as they refactor their code or replace it with something in another language, they will have to refactor, test and deploy your code as well.
A better path is extracting the functions from largeshell.sh into smaller files ("modules") and putting them in a shared directory (shlib?).
With names such as sqlutil.sh, datetime.sh, formatting.sh, mailstuff.sh and comm.sh you can pick the include files you need (and largeshell.sh will include them all).
It's been a while and it would appear that my original solution is the best one out there. Thanks for the feedback.
So I read that the easiest way to use .conf files for bash scripts is to use source to load such files. Now, what if I want to edit this file?
Some code I found does a really good job:
function set_config(){
sed -i "s/^\($1\s*=\s*\).*\$/\1$2/" $conf_file
}
But if the variable is not yet defined, it doesn't define it; nor does it check whether the parameters are passed correctly, and it isn't secure, doesn't handle default values, etc.
Do reliable tools/code already exist to edit .conf files which contain key="value" pairs? For instance, I would like to be able to do things like this:
conf_file="my_script.conf"
conf_load $conf_file #should create the file if it doesn't exist !
read=$(conf_get_value "data" "default_value") #should read the value with key "data", defaulting to "default_value"
if [[ $? = 0 ]] #we should be able to know if the read was successful
then
echo "Successfully read value for field \"data\" : $read"
else
echo "Default value for field \"data\" : $read"
fi
conf_set "something_new" "a great value!" #should add the key "something_new" as it doesn't exist
conf_set "data" "new_value" #should edit the value with key "data"
if [[ $? = 0 ]]
then
echo "Edit successful !"
else #something went wrong :-/
echo "Edit failed !"
fi
Before running this code, the conf file would contain:
data="some_value"
and after it would be
data="new_value"
something_new="a great value!"
and the code should output
Successfully read value for field "data" : some_value
Edit successful !
I am using bash version 4.3.30.
Thanks for your help.
I'd do that with awk, since it's rather good at tokenizing:
# overwrite config's entries for KEY with VALUE or else appends the definition
# Usage: set_config KEY VALUE
set_config() {
[ -n "$1" ] && awk -F= -v key="$1" -v new="$1=\"$2\"" '
$1 == key { $0 = new; key_found = 1; }
{ print }
END { if (!key_found) { print new; } }
' "$conf_file" > "$conf_file.new" \
&& cat "$conf_file.new" > "$conf_file" && rm "$conf_file.new"
}
If run without arguments, set_config() will do nothing and return false. If run with only one argument, it will create an empty value (outputting KEY="").
The awk command parses the .conf file line by line, looking for each definition of the given key and altering it to the new value. All lines are then printed (with or without modification), preserving the original order. If the key hasn't yet been found by the end of the file, this appends the new definition.
Because you can't pipe a file atop itself, this gets saved with a ".new" extension and then copied atop the original in a manner that preserves permissions. The ".new" copy is then removed. I used && to ensure that these never happen if an error occurred earlier in the function.
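For instance, with the question's my_script.conf, usage would look like this (a quick sketch):
conf_file="my_script.conf"
set_config data new_value                  # rewrites the existing data=... line
set_config something_new "a great value!"  # appends a new definition
cat "$conf_file"
# data="new_value"
# something_new="a great value!"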
Also note that the type of ".conf file" you're referring to (the type you source with a POSIX shell) will never have spaces around its equals signs, so the \s* parts of your sed command aren't needed.
I have a Bash script that builds a string to run as a command.
Script:
#! /bin/bash
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
teamAComm="`pwd`/a.sh"
teamBComm="`pwd`/b.sh"
include="`pwd`/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
illcommando="$serverbin include='$include' server::team_l_start = '${teamAComm}' server::team_r_start = '${teamBComm}' CSVSaver::save='true' CSVSaver::filename = 'out.csv'"
echo "running: $illcommando"
# $illcommando > server-output.log 2> server-error.log
$illcommando
which does not seem to supply the arguments correctly to the $serverbin.
Script output:
running: /usr/local/bin/rcssserver include='/home/joao/robocup/runner_workdir/server_official.conf' server::team_l_start = '/home/joao/robocup/runner_workdir/a.sh' server::team_r_start = '/home/joao/robocup/runner_workdir/b.sh' CSVSaver::save='true' CSVSaver::filename = 'out.csv'
rcssserver-14.0.1
Copyright (C) 1995, 1996, 1997, 1998, 1999 Electrotechnical Laboratory.
2000 - 2009 RoboCup Soccer Simulator Maintenance Group.
Usage: /usr/local/bin/rcssserver [[-[-]]namespace::option=value]
[[-[-]][namespace::]help]
[[-[-]]include=file]
Options:
help
display generic help
include=file
parse the specified configuration file. Configuration files
have the same format as the command line options. The
configuration file specified will be parsed before all
subsequent options.
server::help
display detailed help for the "server" module
player::help
display detailed help for the "player" module
CSVSaver::help
display detailed help for the "CSVSaver" module
CSVSaver Options:
CSVSaver::save=<on|off|true|false|1|0|>
If save is on/true, then the saver will attempt to save the
results to the database. Otherwise it will do nothing.
current value: false
CSVSaver::filename='<STRING>'
The file to save the results to. If this file does not
exist it will be created. If the file does exist, the results
will be appended to the end.
current value: 'out.csv'
if I just paste the command /usr/local/bin/rcssserver include='/home/joao/robocup/runner_workdir/server_official.conf' server::team_l_start = '/home/joao/robocup/runner_workdir/a.sh' server::team_r_start = '/home/joao/robocup/runner_workdir/b.sh' CSVSaver::save='true' CSVSaver::filename = 'out.csv' (in the output after "running: ") it works fine.
You can use eval to execute a string:
eval $illcommando
your_command_string="..."
output=$(eval "$your_command_string")
echo "$output"
I usually run command strings with $(commandStr). If that doesn't help, I find bash debug mode great: run the script as bash -x script.
Don't put your commands in variables; just run them:
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
PWD=$(pwd)
teamAComm="$PWD/a.sh"
teamBComm="$PWD/b.sh"
include="$PWD/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
$serverbin include=$include server::team_l_start = ${teamAComm} server::team_r_start=${teamBComm} CSVSaver::save='true' CSVSaver::filename = 'out.csv'
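If you really do need to build the command up in pieces first, a bash array is the usual safe alternative to a flat string (my sketch, not part of the original answer; see BashFAQ/050 for the full story):
# build the command as an array: one element per argument
cmd=(
  "$serverbin"
  include="$include"
  server::team_l_start="$teamAComm"
  server::team_r_start="$teamBComm"
  CSVSaver::save=true
  CSVSaver::filename=out.csv
)
# each array element expands to exactly one argument, whatever it contains
"${cmd[@]}" > server-output.log 2> server-error.log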
./me casts raise_dead()
I was looking for something like this, but I also needed to reuse the same string minus two parameters so I ended up with something like:
my_exe ()
{
mysql -sN -e "select $1 from heat.stack where heat.stack.name=\"$2\";"
}
This is something I use to monitor OpenStack Heat stack creation. In this case, I expect two conditions: an action 'CREATE' and a status 'COMPLETE' on a stack named "Somestack".
To get those variables I can do something like:
ACTION=$(my_exe action Somestack)
STATUS=$(my_exe status Somestack)
if [[ "$ACTION" == "CREATE" ]] && [[ "$STATUS" == "COMPLETE" ]]
...
Here is my gradle build script that executes strings stored in heredocs:
current_directory=$( realpath "." )
GENERATED=${current_directory}/"GENERATED"
build_gradle=$( realpath build.gradle )
## mkdir -p because .gitignore ignores this folder (and the cp below needs it to exist):
mkdir -p "$GENERATED"
COPY_BUILD_FILE=$( cat <<COPY_BUILD_FILE_HEREDOC
cp
$build_gradle
$GENERATED/build.gradle
COPY_BUILD_FILE_HEREDOC
)
$COPY_BUILD_FILE
GRADLE_COMMAND=$( cat <<GRADLE_COMMAND_HEREDOC
gradle run
--build-file
$GENERATED/build.gradle
--gradle-user-home
$GENERATED
--no-daemon
GRADLE_COMMAND_HEREDOC
)
$GRADLE_COMMAND
The lone ")" are kind of ugly. But I have no clue how to fix that aesthetic aspect.
To see all commands that are being executed by the script, add the -x flag to your shebang line, and execute the command normally:
#! /bin/bash -x
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
teamAComm="`pwd`/a.sh"
teamBComm="`pwd`/b.sh"
include="`pwd`/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
$serverbin include="$include" server::team_l_start="${teamAComm}" server::team_r_start="${teamBComm}" CSVSaver::save='true' CSVSaver::filename='out.csv'
Then if you sometimes want to ignore the debug output, redirect stderr somewhere.
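For example, since the -x trace is written to stderr (assuming the script above is saved as script.sh):
./script.sh 2>/dev/null    # discard the trace (note: real error messages go too)
./script.sh 2>xtrace.log   # or keep the trace in a file for later inspection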
For me,
echo XYZ_20200824.zip | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}[[:digit:]]{2}'
was working fine, but I was unable to store the output of the command in a variable.
I had the same issue; I tried eval but didn't get the output.
Here is the answer to my problem:
cmd=$(echo XYZ_20200824.zip | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}[[:digit:]]{2}')
echo $cmd
My output is now 20200824.