Calling sys.argv in a print command in Python - python-2.6

I am a beginner programmer and I can't get this to work. I am trying to include a value from sys.argv in a print statement in my script.
How I run it: python test.py device1
What I have tried:
print ('please Check' + sys.argv[1]_config.txt )
Current output:
please Check + sys.argv[1]+_config.txt
What I am looking for:
please check device1_config.txt
Any idea how to do this?

for a in range(len(sys.argv)):
    # print(a)
    if 'device' in sys.argv[a]:
        print('please check device1_config.txt')
Or to include it as a variable:
for a in range(len(sys.argv)):
    # print(a)
    if 'device' in sys.argv[a]:
        dev = sys.argv[a]
        print('please check {0}_config.txt'.format(dev))
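If the device name is always the first command-line argument, the loop is not needed at all; a minimal sketch (assuming sys.argv[1] holds a name like device1):

import sys

# sys.argv[0] is the script name; sys.argv[1] is the first argument
print('please check ' + sys.argv[1] + '_config.txt')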

How to return output of shell script into Jenkinsfile [duplicate]

I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of Groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh (
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since what version, but the sh/bat steps can now return the standard output, simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
scripted pipeline
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
Unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1
- but it's then up to you to parse it out of the main output, and you won't get the output if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is the directory listing from stdout
} else {
    // output is the error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case which I believe will make sense!
node('master'){
    stage('stage1'){
        def commit = sh (returnStdout: true, script: '''echo hi
        echo bye | grep -o "e"
        date
        echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands rather than Groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
        echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script{
    sh " 'shell command here' > command"
    command_var = readFile('command').trim()
    sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
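A minimal Python sketch of that lookup (assuming command_var has been exported into the environment the script runs in, as in the example above):

import os

# Returns the exported value, or None if the variable is not set
command_output = os.getenv("command_var")
print(command_output)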
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the value in Groovy, and get the parameters from each line.
Here, the comma is the delimiter.
Ex: releaseModules.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter, configurable-wf-report), the build number (3rd parameter, 94), and the commit id (4th parameter, 23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work then.
I had a similar issue, where I applied something which is not a clean way of doing this, but eventually it worked and served the purpose.
Solution -
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
example -
steps {
    script {
        sh '''
        echo $PATH>path.txt
        # using '>' because I want to create a new file every time to get the newest value of PATH
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local groovy variable assignment
        // One can assign these values to env and params as below -
        env.PATH = path // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
output:
2
Note that it is not a simple single quote but a backquote ( ` ).

execution of shell command from jenkinsfile

I am trying to execute a set of commands from a Jenkinsfile.
The problem is, when I try to assign the value of stdout to a variable, it is not working.
I tried different combinations of double quotes and single quotes, but so far no luck.
I executed the script with the latest version of the Jenkinsfile as well as the old version. Putting the shell commands inside """ """ does not allow creating a new variable, and gives an error like client_name command does not exist.
String nodeLabel = env.PrimaryNode ? env.PrimaryNode : "slave1"
echo "Running on node [${nodeLabel}]"
node("${nodeLabel}"){
sh "p4 print -q -o config.yml //c/test/gradle/hk/config.yml"
def config = readYaml file: 'devops-config.yml'
def out = sh (script:"client_name=${config.BasicVars.p4_client}; " +
'echo "client name: $client_name"' +
" cmd_output = p4 clients -e $client_name" +
' echo "out variable: $cmd_output"',returnStdout: true)
}
I want to assign the stdout from the command p4 clients -e $client_name to the variable cmd_output.
But when I execute the code, the error that is thrown is:
NoSuchPropertyException: client_name is not defined at line cmd_output = p4 clients -e $client_name
What am I missing here?
Your problem here is that all the $ are interpreted by Jenkins when the string is in double quotes. The first two times there's no problem, since the first variable comes from Jenkins and the second time it's a single-quoted string.
The third variable is in a double-quoted string, so Jenkins tries to replace the variable with its value, but it can't find it since it's generated only when the shell script is executed.
The solution is to escape the $ in $client_name (or define client_name in an environment block).
I rewrote the block:
String nodeLabel = env.PrimaryNode ? env.PrimaryNode : "slave1"
echo "Running on node [${nodeLabel}]"
node("${nodeLabel}"){
    sh "p4 print -q -o config.yml //c/test/gradle/hk/config.yml"
    def config = readYaml file: 'devops-config.yml'
    def out = sh (script: """
        client_name=${config.BasicVars.p4_client}
        echo "client name: \$client_name"
        cmd_output=\$(p4 clients -e \$client_name)
        echo "out variable: \$cmd_output"
    """, returnStdout: true)
}

To run "pylint" tool on multiple Python files on Windows Machine

I want to run the "pylint" tool on multiple Python files present under one directory.
I want one consolidated report for all the Python files.
I am able to run it on one Python file individually, but I want to run it on a bunch of files.
Please help with the command for the same.
I'm not a Windows user, but isn't "pylint directory/*.py" enough?
If the directory is a package (in the PYTHONPATH), you may also run "pylint directory".
Someone wrote a wrapper in Python 2 to handle this.
The code:
#! /usr/bin/env python
'''
Module that runs pylint on all python scripts found in a directory tree..
'''
import os
import re
import sys

total = 0.0
count = 0

def check(module):
    '''
    apply pylint to the file specified if it is a *.py file
    '''
    global total, count
    if module[-3:] == ".py":
        print "CHECKING ", module
        pout = os.popen('pylint %s' % module, 'r')
        for line in pout:
            if re.match("E....:.", line):
                print line
            if "Your code has been rated at" in line:
                print line
                score = re.findall("\d+.\d\d", line)[0]
                total += float(score)
                count += 1

if __name__ == "__main__":
    try:
        print sys.argv
        BASE_DIRECTORY = sys.argv[1]
    except IndexError:
        print "no directory specified, defaulting to current working directory"
        BASE_DIRECTORY = os.getcwd()
    print "looking for *.py scripts in subdirectories of ", BASE_DIRECTORY
    for root, dirs, files in os.walk(BASE_DIRECTORY):
        for name in files:
            filepath = os.path.join(root, name)
            check(filepath)
    print "==" * 50
    print "%d modules found" % count
    print "AVERAGE SCORE = %.02f" % (total / count)

Figuring out pipes in python

I am currently writing a program in Python and I am stuck. So my question is:
I have a program that reads a file and prints some lines to stdout like this:
#imports
import sys

#number of args
numArgs = len(sys.argv)

#ERROR if not enough args were committed
if numArgs <= 1:
    sys.exit("Not enough arguments!")

#naming input file from args
Input = sys.argv[1]

#opening files
try:
    fastQ = open(Input, 'r')
except IOError, e:
    sys.exit(e)

#parsing through file
while 1:
    #saving the lines
    firstL = fastQ.readline()
    secondL = fastQ.readline()
    #you could maybe skip these lines to save ram
    fastQ.readline()
    fastQ.readline()
    #make sure that there are no blank lines in the file
    if firstL == "" or secondL == "":
        break
    #edit the Header to begin with '>'
    firstL = '>' + firstL.replace('#', '')
    sys.stdout.write(firstL)
    sys.stdout.write(secondL)

#close both files
fastQ.close()
Now I want to rewrite this program so that I can run a command line like: zcat "textfile" | python "myprogram" > "otherfile". So I looked around and found subprocess, but I can't seem to figure out how to do it. Thanks for your help.
EDIT:
Now, if what you are doing is trying to write a Python script to orchestrate the execution of both zcat and myprogram, THEN you may need subprocess. – rchang
The intent is to have the "textfile" and the program on a cluster, so I don't need to copy any files from the cluster. I just want to log in on the cluster and use the command zcat "textfile" | python "myprogram" > "otherfile", so that zcat and the program do their thing and I end up with "otherfile" on the cluster. I hope you understand what I want to do.
Edit #2:
my solution
#imports
import sys
import fileinput

# Counter, maybe there is a better way
count = 0

# Iteration over the input
for line in fileinput.input():
    # Selection of the header
    if count == 0:
        # Format the header
        newL = '>' + line.replace('#', '')
        # Print the header without a newline
        sys.stdout.write(newL)
    # Selection of the sequence
    elif count == 1:
        # Print the sequence
        sys.stdout.write(line)
    # Increment the counter
    count += 1
    count = count % 4
THX
You could use fastQ = sys.stdin to read the input from stdin instead of the file or (more generally) fastQ = fileinput.input() to read from stdin and/or files specified on the command-line.
There is also fileinput.hook_compressed, so that you don't need zcat and can read the compressed file directly instead:
$ myprogram textfile >otherfile
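A minimal Python 3 sketch of that hook (note it is only applied to files whose names end in .gz or .bz2, and it yields bytes for compressed input, so decode before use):

import fileinput

# hook_compressed transparently opens .gz and .bz2 files by extension
for line in fileinput.input(openhook=fileinput.hook_compressed):
    if isinstance(line, bytes):
        line = line.decode()
    print(line, end='')

With this, the script can be run directly on the compressed file, e.g. python myprogram textfile.gz > otherfile.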

How to provide flags to a ruby script like -v, -o and place code accordingly?

I have created a Ruby script that I want to run with some flags on the console; say, a -v flag prints output to the console and -o stores output in a new file, with the file name taken from the console using gets().
My code has following structure:
puts "Enter filename to analyze:\n\n"
filename = gets().chomp
puts "Provide filename to store result in new text file:\n\n"
output = gets().chomp
filesize = File.size(filename)
puts "File size in Bytes:\n#{filesize.to_i}\n"
pagecontent = filesize - 20
puts "\n\nData:\n#{pagecontent}\n\n"
File.open(filename,'r') do |file|
#whole process with few do..end in between that I want to do in 2 different #ways.
#If I provide -v flag on console result of this code should be displayed on console
#and with -o flag it should be stored in file with filename provided on console #stored in output variable declared above
end
end
Use the stdlib OptionParser.
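A minimal sketch of how that could look (the flag names match the question; the analysis itself is a placeholder, not your real code):

require 'optparse'

options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: script.rb [options]"
  # -v prints the result to the console
  opts.on("-v", "--verbose", "Print result to the console") do
    options[:verbose] = true
  end
  # -o FILE writes the result to the given file
  opts.on("-o FILE", "--output FILE", "Write result to FILE") do |file|
    options[:output] = file
  end
end.parse!

result = "analysis result here" # placeholder for the real file analysis
puts result if options[:verbose]
File.write(options[:output], result) if options[:output]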
