I am using urfave/cli (https://github.com/urfave/cli)
to create a CLI with two subcommands.
I am able to create a CLI with a single subcommand,
but I really have no idea how to define the flags.
What's the difference between a global flag and a local flag?
Each command can optionally specify subcommands via its Subcommands field. A subcommand is itself of type Command, which allows commands to be nested and composed.
To achieve something like:
cli-tool command1 command2 --command2flag
you could have a commands structure like:
app := &cli.App{
    // ...
    Commands: []*cli.Command{
        {
            Name:  "command1",
            Usage: "...",
            // Action: ...
            Subcommands: []*cli.Command{
                {
                    Name: "command2",
                    Flags: []cli.Flag{
                        &cli.StringFlag{
                            Name: "command2flag",
                            // ...
                        },
                    },
                },
            },
        },
        // ...
    },
}
You can see here that command2 is nested in command1's subcommands, and the flags defined for command2 apply only to command2. This is an example of a local flag.
Global flags apply to every command and subcommand. This can be useful for some kind of config that the CLI tool needs for all commands, e.g. the address of the server to talk to.
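For illustration, a minimal sketch of a global flag, assuming urfave/cli v2 (the flag name "server" is just an example):
app := &cli.App{
    // a global (app-level) flag, visible to every command and subcommand
    Flags: []cli.Flag{
        &cli.StringFlag{
            Name:  "server",
            Usage: "address of the server all commands talk to",
        },
    },
    Commands: []*cli.Command{
        {
            Name: "command1",
            Action: func(c *cli.Context) error {
                // the app-level flag can be read from any (sub)command's context
                fmt.Println("server:", c.String("server"))
                return nil
            },
        },
    },
}
It would then be passed before the command name, e.g. cli-tool --server addr command1.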
How to get the output of kubectl describe deployment nginx | grep Image in an environment variable?
My code:
stage('Deployment') {
    script {
        sh """
        export KUBECONFIG=/tmp/kubeconfig
        kubectl describe deployment nginx | grep Image"""
    }
}
In this situation, you can access environment variables in the pipeline scope through the env object, and assign values to its members to create new environment variables. You can also use the optional returnStdout parameter of the sh step to return the step's stdout and assign it to a Groovy variable (which works here because the call is inside a script block in the pipeline).
script {
    env.IMAGE = sh(script: 'export KUBECONFIG=/tmp/kubeconfig && kubectl describe deployment nginx | grep Image', returnStdout: true).trim()
}
Note that you would also want to place the KUBECONFIG environment variable in the environment directive at the pipeline scope instead (unless the kubeconfig differs between scopes):
pipeline {
    environment { KUBECONFIG = '/tmp/kubeconfig' }
}
You can use the syntax:
someVariable = sh(returnStdout: true, script: some_script).trim()
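For example, a sketch of capturing the grep output and reusing it in a later step (the echo is just for illustration):
script {
    // assumes KUBECONFIG is already set, e.g. via the environment directive above
    def imageLine = sh(returnStdout: true, script: 'kubectl describe deployment nginx | grep Image').trim()
    echo "Image line: ${imageLine}"
}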
For flags I can specify a description which appears in the --help output:
flag.String("a", "", "It is a flag")
But I don't have flags, only arguments; I use the CLI like this:
mycommand start 4
Is it possible to use --help to see a description for the "start" (and other) arguments?
Since this is not directly supported by the flag package, I know only of alecthomas/kong, which does include argument usage:
package main

import "github.com/alecthomas/kong"

var CLI struct {
    Rm struct {
        Force     bool     `help:"Force removal."`
        Recursive bool     `help:"Recursively remove files."`
        Paths     []string `arg:"" name:"path" help:"Paths to remove." type:"path"`
    } `cmd:"" help:"Remove files."`
    Ls struct {
        Paths []string `arg:"" optional:"" name:"path" help:"Paths to list." type:"path"`
    } `cmd:"" help:"List paths."`
}

func main() {
    ctx := kong.Parse(&CLI)
    switch ctx.Command() {
    case "rm <path>":
    case "ls":
    default:
        panic(ctx.Command())
    }
}
Running shell --help rm will then give:
$ shell --help rm
usage: shell rm <paths> ...
Remove files.
Arguments:
<paths> ... Paths to remove. <====== "usage" for cli arguments (not flags)!
Flags:
--debug Debug mode.
-f, --force Force removal.
-r, --recursive Recursively remove files.
I am looking for a solution to inject secrets only during a Jenkins step:
application.properties:
spring.datasource.username=mySecretValue
spring.datasource.password=mySecretValue
...
Current State:
stage('Test') {
    agent {
        docker {
            image 'myregistry.com/maven:3-alpine'
            reuseNode true
        }
    }
    steps {
        configFileProvider([configFile(fileId: 'maven-settings-my-services', variable: 'MAVEN_SETTINGS')]) {
            sh 'mvn -s $MAVEN_SETTINGS verify'
        }
    }
...
Thanks!
Option 1) Add a password job parameter for that secret. But the job has to be run manually, because someone needs to enter the secret.
// write the secret to application.properties in any stage that
// runs prior to the test and deployment stages
sh "echo spring.datasource.password=${params.DB_PASSWORD} >> application.properties"
Option 2) Add the secret as a Jenkins Secret text credential. But adding a credential needs Jenkins administrator access, and future updates to the secret also need to be considered.
stage('test or deployment') {
    environment {
        DB_PASSWORD = credentials('<credential_id_of_the_secret>')
    }
    steps {
        sh "echo spring.datasource.password=${env.DB_PASSWORD} >> application.properties"
    }
}
One way I did it was to attach the secrets with the Credentials plugin, variable by variable:
echo 'Attach properties for tests to property file:'
withCredentials([string(credentialsId: 'DB_PW', variable: 'SECRET_ENV')]) {
    sh 'echo spring.mydatabase.password=${SECRET_ENV} >> ./src/main/resources/application.properties'
}
Instead of "echo", "sed" would also an option to replace the empty value for the key instead of add the property to the end of the file.
The second way I did it was to attach a complete property file instead of individual key/value pairs. The property file contains all properties needed for the tests:
echo 'Attach properties file for test runs:'
withCredentials([file(credentialsId: 'TEST_PROPERTIES', variable: 'APPLICATION_PROPERTIES')]) {
    dir("${env.WORKSPACE}") {
        sh 'cp ${APPLICATION_PROPERTIES} ./src/main/resources/application.properties'
    }
}
In both cases the secrets have to be deleted after the run, otherwise they can be viewed in plaintext under the workspace folder.
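One way to ensure that is a post section that always removes the file again, for example (a sketch, assuming the path used above):
post {
    always {
        sh 'rm -f ./src/main/resources/application.properties'
    }
}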
I want to use defined parameters in a Jenkinsfile in several shell commands, but I get an exception. In my example I want to execute a simple docker command; the parameter defines the path to the docker executable.
This is my very short Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(defaultValue: '/Applications/Docker.app/Contents/Resources/bin/docker', description: '', name: 'docker')
    }
    stages {
        stage('Test') {
            steps {
                sh 'sudo ${params.docker} ps -a'
            }
        }
    }
}
And I get the following exception:
[e2e-web-tests_master-U4G4QJHPUACAEACYSISPVBCMQBR2LS5EZRVEKG47I2XHRI54NCCQ] Running shell script
/Users/Shared/Jenkins/Home/workspace/e2e-web-tests_master-U4G4QJHPUACAEACYSISPVBCMQBR2LS5EZRVEKG47I2XHRI54NCCQ#tmp/durable-e394f175/script.sh: line 2: ${params.docker}: bad substitution
When I change the Jenkinsfile to not use the parameter inside the shell command, it passes successfully:
pipeline {
    agent any
    parameters {
        string(defaultValue: '/Applications/Docker.app/Contents/Resources/bin/docker', description: '', name: 'docker')
    }
    stages {
        stage('Test') {
            steps {
                sh 'sudo /Applications/Docker.app/Contents/Resources/bin/docker ps -a'
            }
        }
    }
}
So, how can I use parameters inside a shell command in a Jenkinsfile? I tried string and text as parameter types.
The issue you have is that single-quoted strings in Groovy are plain Java Strings.
Double-quoted strings are templatable: they return a GString if they contain a template expression, or otherwise a plain Java String.
So if you use double quotes:
stages {
    stage('Test') {
        steps {
            sh "sudo ${params.docker} ps -a"
        }
    }
}
then the value of params.docker will be substituted for ${params.docker} inside the sh script before it runs.
If you want to put literal double quotes inside "sudo ${params.docker} ps -a", it doesn't work like bash escaping (which is confusing); you use Java-style escaping instead: "sudo \"${params.docker}\" ps -a".
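Inside the steps block that would look like:
steps {
    sh "sudo \"${params.docker}\" ps -a"
}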
I have a question related to DRMAA and the cluster config file in Snakemake.
Currently I have a pipeline and I submit jobs to the cluster via DRMAA with the following command:
snakemake --drmaa " -q short.q -pe smp 8 -l membycore=4G" --jobs 100 -p file1/out file2/out file3/out
The problem is that some of the rules/jobs require more or fewer resources. I thought that if I used the JSON cluster file I would be able to submit the jobs with different resources. My JSON file looks like this:
{
    "__default__":
    {
        "-q": "short.q",
        "-pe": "smp 1",
        "-l": "membycore=4G"
    },
    "job1":
    {
        "-q": "short.q",
        "-pe": "smp 8",
        "-l": "membycore=4G"
    },
    "job2":
    {
        "-q": "short.q",
        "-pe": "smp 8",
        "-l": "membycore=4G"
    }
}
When I run the following command my jobs (job1 and job2) are submitted with default options and not with the custom ones:
snakemake --jobs 100 --cluster-config cluster.json --drmaa -p file1/out file2/out file3/out
What am I doing wrong? Is it that I cannot combine the --drmaa option with the cluster-config file?
The cluster config file simply allows you to define variables that are later substituted into --cluster/--cluster-sync/--drmaa via the corresponding placeholders. There's no DRMAA-specific magic involved here. Have a look at the corresponding section in the documentation again.
Maybe an example makes things clearer:
Cluster config:
{
    "__default__":
    {
        "time": "02:00:00",
        "mem": "1G"
    }
    # more rule-specific definitions here...
}
Example snakemake arguments to make use of the above:
--drmaa " -pe OpenMP {threads} -l mem_free={cluster.mem} -l h_rt={cluster.time}"
or
--cluster-sync "qsub -sync y -pe OpenMP {threads} -l mem_free={cluster.mem} -l h_rt={cluster.time}"
cluster.time and cluster.mem will be replaced accordingly per rule.
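Applied to the question above, the keys in cluster.json should be plain names that you then reference as {cluster.<key>} in the --drmaa string; a sketch (the key names queue, cores and mem are arbitrary):
{
    "__default__": { "queue": "short.q", "cores": "smp 1", "mem": "membycore=4G" },
    "job1":        { "queue": "short.q", "cores": "smp 8", "mem": "membycore=4G" },
    "job2":        { "queue": "short.q", "cores": "smp 8", "mem": "membycore=4G" }
}
snakemake --jobs 100 --cluster-config cluster.json \
    --drmaa " -q {cluster.queue} -pe {cluster.cores} -l {cluster.mem}" \
    -p file1/out file2/out file3/out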
Andreas