add job number to the build number in the project - shell

I am using a shell script and need to add the Jenkins job number that is currently running to the BUILD NUMBER of the iOS project.
I am using PlistBuddy to fetch the build number. I have both the build number and the job number, but I am unable to add them together; it keeps showing me an error.
The script is something like this:
echo "BUILD_NUMBER is = "$BUILD_NUMBER
OUTPUT = $(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" Abc-Info.plist)
OUTPUT1 = $((OUTPUT + BUILD_NUMBER))
echo "$OUTPUT1"
Please help me understand what I am doing wrong.
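The most likely culprit is the whitespace around =: in shell, OUTPUT = $(...) is parsed as a command named OUTPUT with = as an argument, so the variable is never assigned and the arithmetic fails. A minimal corrected sketch (the plist name is taken from the question; the optional write-back line and the assumption that CFBundleVersion holds a plain integer are mine):
echo "BUILD_NUMBER is = $BUILD_NUMBER"
# No spaces around = in shell assignments
OUTPUT=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" Abc-Info.plist)
OUTPUT1=$((OUTPUT + BUILD_NUMBER))
echo "$OUTPUT1"
# Optionally write the summed value back into the plist
/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $OUTPUT1" Abc-Info.plist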


how to pass specific arguments in batch commands during jenkins builds

I'm trying to use Jenkins to automate performance testing with JMeter;
each build is a single JMeter test, and I want to increase the number of users (threads) for each Jenkins build if the previous one was successful.
I have configured most of the build: with the SSH plugin I can restart Tomcat and copy catalina.out, and with performance publishing I can open the .jtl file and determine whether the build was successful.
What I want is to execute a different batch command for the next build (to increase the number of users (threads) and user IDs).
For example:
jmeter -Jthreads=10 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=20 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=30 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl...
Is there a good JMeter plugin or counter that I can use to increase a variable by 10 each time:
jmeter -Jthreads=%variable1%...
I have tried setting an environment variable and then incrementing it with:
"SET /A thread+=10"
but it doesn't change the variable, because Jenkins opens its own CMD in a new process:
("cmd /c call C:\WINDOWS\TEMP\jenkins556482303577128680.bat")
Use the following SET command to increase the threads variable by 10:
SET /A threads=threads+10
Or inside double quotes:
SET /A "threads+=10"
Not knowing your Jenkins configuration, which plugins you have installed, or how you run the test, it is quite hard to come up with the best solution.
The only "universal" workaround I can think of is writing the current number of threads into a file in the Jenkins workspace and reading the value from that file on the next execution. To implement it:
Add setUp Thread Group to your Test Plan
Add JSR223 Sampler to your Thread Group
Put the following Groovy code into the "Script" area:
import org.apache.jmeter.threads.ThreadGroup
import org.apache.jorphan.collections.SearchByClass
import org.apache.commons.io.FileUtils
SampleResult.setIgnore()
def file = new File(System.getenv('WORKSPACE') + System.getProperty('file.separator') + 'threads.number')
if (file.exists()) {
    // Read the previous thread count, add 10 and store it back
    def newThreadNum = (FileUtils.readFileToString(file, 'UTF-8') as int) + 10
    FileUtils.writeStringToFile(file, newThreadNum as String)
    // Reach the loaded test plan tree via reflection and update every Thread Group
    def engine = ctx.getEngine()
    def test = org.apache.commons.lang.reflect.FieldUtils.getField(engine.getClass(), 'test', true)
    def testPlanTree = test.get(engine)
    SearchByClass<ThreadGroup> threadGroupSearch = new SearchByClass<>(ThreadGroup.class)
    testPlanTree.traverse(threadGroupSearch)
    def threadGroups = threadGroupSearch.getSearchResults()
    threadGroups.each {
        it.setNumThreads(newThreadNum)
    }
} else {
    // First run: seed the file with the initial thread count from the "threads" property
    FileUtils.writeStringToFile(file, props.get('threads'))
}
The code writes the current number of threads into a file called threads.number in the Jenkins workspace; on subsequent runs it reads the value from the file, adds 10, writes it back, and applies the new value to every Thread Group.
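Note that the else branch stores props.get('threads'), so presumably the very first build still has to pass the initial thread count as a JMeter property on the command line, as in the question, e.g.:
jmeter -Jthreads=10 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl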
For now I am creating 20 .jmx files (1.jmx, 2.jmx, 3.jmx ...), each with a different number of users, and calling them with this command:
jmeter -n -t C:\TestScripts\%BUILD_NUMBER%.jmx -l C:\TestScripts\%BUILD_NUMBER%.jtl
The first build will call 1.jmx, the second 2.jmx, and so on.
It isn't the best method, but it works for now. I will try your advice over the weekend when I have more time.
I have found a solution that works for me; it isn't pretty. I created a Python script which changes the .csv file from which JMeter reads the number of threads and the starting user ID. The Python script increments the starting user ID by the number of threads in the previous build, and the number of threads by 10.
# Read the current number of threads (a) and the starting user id (b) from the CSV
file = open('C:\\Users\\mp\\AppData\\Local\\Programs\\Python\\Python37-32\\eggs.csv', 'r')
a, b = file.readlines()[0].split(",")
file.close()
print(a, b)
a = int(a)
b = int(b)
# Shift the starting user id past the users of the previous build, then add 10 threads
b = a + b
a = a + 10
print(a, b)
f = open("C:\\Users\\mp\\AppData\\Local\\Programs\\Python\\Python37-32\\eggs2.csv", "a")
f.write(str(a) + "," + str(b))
f.close()
I have Python on my PC, and I am calling the script in Jenkins as a Windows batch command:
C:\Users\mp\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\mp\AppData\Local\Programs\Python\Python37-32\rename_write_file.py
I am much better in Python than Java, so I implemented this in Python.
So for each new test, the CSV file from which JMeter reads its values is changed.

Shell Scripting to compare the value of current iteration with that of the previous iteration

I have an infinite loop which uses the AWS CLI to get the microservice names and their parameters (desired tasks, number of running tasks, etc.) for an environment.
There are hundreds of microservices running in an environment. I have a requirement to compare the value of the ECS "running tasks" metric for a particular microservice in the current loop iteration with that of the previous iteration.
Say a microservice X has a running-task count of 5. As it is an infinite loop, after some time the loop comes around to microservice X again. Now, let's assume the running-task count is 4. I want to compare the value for the current iteration, which is 4, with the value from the previous iteration, which is 5.
If you are asking a generic question of how to keep a previous value around so it can be compared to the current value, just store it in a variable. You can use the following as a starting point:
#!/bin/bash
previousValue=0
while read v; do
    echo "Previous value=${previousValue}; Current value=${v}"
    previousValue=${v}
done
exit 0
Say the above script is called testval.sh, and you have an input file called test.in with the following values:
2
1
4
6
3
0
5
Then running
./testval.sh <test.in
will generate the following output:
Previous value=0; Current value=2
Previous value=2; Current value=1
Previous value=1; Current value=4
Previous value=4; Current value=6
Previous value=6; Current value=3
Previous value=3; Current value=0
Previous value=0; Current value=5
If the skeleton script works for you, feel free to modify it to do whatever comparisons you need.
Hope this helps.
I don't know how your input looks exactly, but something like this might be useful for you:
The script
#!/bin/bash
declare -A app_stats
while read app tasks
do
    if [[ ${app_stats[$app]} -ne $tasks && ! -z ${app_stats[$app]} ]]
    then
        echo "Number of tasks for $app has changed from ${app_stats[$app]} to $tasks"
        app_stats[$app]=$tasks
    else
        app_stats[$app]=$tasks
    fi
done <<< "$(cat input.txt)"
The input
App1 2
App2 5
App3 6
App1 6
The output
Number of tasks for App1 has changed from 2 to 6
Regards!
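To connect this back to the AWS CLI use case in the question, here is a hedged sketch of the same associative-array idea wrapped around aws ecs calls. The cluster name, polling interval and --query expressions are assumptions and may need adjusting (for example for pagination) in your environment:
#!/bin/bash
# Sketch: poll ECS running-task counts and report changes between loop iterations.
cluster="my-cluster"    # assumption: your ECS cluster name
declare -A running_tasks
while true
do
    # Emit one "<serviceName> <runningCount>" pair per line and compare with the previous loop
    while read -r svc count
    do
        if [[ -n ${running_tasks[$svc]} && ${running_tasks[$svc]} -ne $count ]]
        then
            echo "Running tasks for $svc changed from ${running_tasks[$svc]} to $count"
        fi
        running_tasks[$svc]=$count
    done < <(aws ecs list-services --cluster "$cluster" --query 'serviceArns[]' --output text \
             | xargs -n 10 aws ecs describe-services --cluster "$cluster" \
                   --query 'services[].[serviceName,runningCount]' --output text --services)
    sleep 60
done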

Store the log output into a file with cmdenv-output-file

I need to save the content of the module log window of OMNeT++/Tkenv into a file, so I added the following to omnetpp.ini:
cmdenv-express-mode = false
cmdenv-output-file = log.txt
but I have two problems:
1) after the simulation I do not find "log.txt" if I do not create it myself
2) and when I create it before launching the simulation, under ../omnetpp-4.6/log.txt, I find it empty
I used EV << to display the content of the variables I used. I need to resolve this problem in order to analyze the traffic, so how can I do that, please?
You have to start your simulation in Cmdenv mode. To do that, go to Run | Run Configurations, select your configuration, then select Command line as the User interface. The log file is created in the simulation's directory by default.
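Alternatively, a hedged sketch of doing the same from the command line (assuming your model can be launched with opp_run from the directory that contains omnetpp.ini; NED path and library options may still be needed for your project):
cd /path/to/your/simulation    # the directory containing omnetpp.ini
opp_run -u Cmdenv -f omnetpp.ini -c General
With cmdenv-express-mode = false and cmdenv-output-file = log.txt from the question, log.txt should then be created next to omnetpp.ini.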

Running parallel simulations (using the Command Line)

How can I run the simulation with different configurations? I am using OMNeT++ version 4.6.
My omnetpp.ini file looks like this:
[General]
[Config Dcn2]
network = Dcn2
# leaf switch
#**.down_port = 2
**.up_port = 16 #12 # 4
# spine switch
**.port = 28 # 20 #2048
# crossconnect
**.cross_down_port = 28 # 20 #2048
**.cross_up_port = 28 # 20 #2048
# to set destination of packet
**.number_leaf_switch = 28 # 20 #2048
# link speed
#**.switch_switch_link_speed = 40 Mbps
**.interArrivalTime = ${exponential(.0001),exponential(0.0002),exponential(0.0003)}
**.batch_length = 10
**.buffer_length = 10
sim-time-limit = 1000s
I want to run the code with different values of interArrivalTime. But I can neither run the different configs one after another, nor can I run individual runs in parallel on separate cores.
I have tried the Cmdenv option in the run configurations, but the different runs don't show up apart from the first one. When I set the number of processes to more than one, still only the first run gets simulated. I really cannot find out the reason.
Config Examination
In your case you can perform config examination. OMNeT++ offers different options for that. They are explained under the Parameter Studies section of the OMNeT++ manual.
So you can try one of the following options to examine your configs and thus config file:
./run -a - will show all the configurations in omnetpp.ini
./run -x <config_name> - will give more info about a specific config
./run -x <config_name> -g - see all the combinations of configs
First you will have to navigate to your example folder, and there execute one of the aforementioned commands.
I executed ./run -x Dcn2 -g and got the following results:
OMNeT++ Discrete Event Simulation (C) 1992-2014 Andras Varga, OpenSim Ltd.
Version: 4.6, build: 141202-f785492, edition: Academic Public License -- NOT FOR COMMERCIAL USE
See the license for distribution terms and warranty disclaimer
Setting up Tkenv...
Config: Dcn2
Number of runs: 3
Run 0: $0=exponential(.0001), $repetition=0
Run 1: $0=exponential(0.0002), $repetition=0
Run 2: $0=exponential(0.0003), $repetition=0
End.
This indeed confirms that you have 3 different runs for the simulation parameter you are trying to modify. However, the variable you are using for the interArrivalTime parameter is assigned to $0 by default, because you have not named it.
If you change the following line in your config:
**.interArrivalTime = ${exponential(.0001),exponential(0.0002),exponential(0.0003)}
to
**.interArrivalTime = ${interArrivalTime = exponential(0.0001),exponential(0.0002),exponential(0.0003)}
you will get a more descriptive output for ./run -x Dcn2 -g
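The listing should then presumably look along these lines, with the named variable in place of $0:
Config: Dcn2
Number of runs: 3
Run 0: $interArrivalTime=exponential(0.0001), $repetition=0
Run 1: $interArrivalTime=exponential(0.0002), $repetition=0
Run 2: $interArrivalTime=exponential(0.0003), $repetition=0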
Running different runs of a config:
Next step for you would be to run the different runs for your config. You can do that by navigating to your example directory and execute:
./run -c <config-name> -r <run-number> -u Cmdenv
Note that the <config-name> would be Dcn2 for you, and the -r specifies which of the runs given above you would like to execute.
In other words you can open three terminal windows and navigate to your example directory and do:
./run -c Dcn2 -r 0 -u Cmdenv - for interArrivalTime = exponential(0.0001)
./run -c Dcn2 -r 1 -u Cmdenv - for interArrivalTime = exponential(0.0002)
./run -c Dcn2 -r 2 -u Cmdenv - for interArrivalTime = exponential(0.0003)
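If you prefer a single terminal, a small shell loop can launch all three runs in parallel, one process per run (the log file names here are just an assumption):
for r in 0 1 2; do
    ./run -c Dcn2 -r $r -u Cmdenv > run_$r.log 2>&1 &
done
wait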
Distinguishing Different run results
To be able to distinguish between the output result files of the different runs for your given config you can modify the default name of the output file.
The "how-to" is given in the 12.2.3 Result File Names section of the OMNeT++ manual.
output-vector-file = "${resultdir}/${configname}-${runnumber}.vec"
output-scalar-file = "${resultdir}/${configname}-${runnumber}.sca"
As you can see by default your output files will be distinguished by the ${runnumber} variable. You can further improve it by adding the interArrivalTime to the output file name.
Example:
output-scalar-file = "${resultdir}/${configname}-${runnumber}-IAtime=${interArrivalTime}.sca"
output-vector-file = "${resultdir}/${configname}-${runnumber}-IAtime=${interArrivalTime}.vec"
I have not tested this final approach, so you might get some errors along the way.

Invalid job array specification in slurm

I am submitting a toy array job in slurm. My command line is
$ sbatch -p development -t 0:30:0 -n 1 -a 1-2 j1
where j1 is script:
#!/bin/bash
echo job id is $SLURM_JOB_ID
echo array job id is $SLURM_ARRAY_JOB_ID
echo task id is $SLURM_ARRAY_TASK_ID
When I submit this, I get an error:
--> Verifying valid submit host (login1)...OK
--> Verifying valid jobname...OK
--> Enforcing max jobs per user...OK
--> Verifying availability of your home dir (/home1/03400/myname)...OK
--> Verifying availability of your work dir (/work/03400/myname)...OK
--> Verifying availability of your scratch dir (/scratch/03400/myname)...OK
--> Verifying valid ssh keys...OK
--> Verifying access to desired queue (development)...OK
--> Verifying job request is within current queue limits...OK
--> Checking available allocation (PRJ-1234)...OK
sbatch: error: Batch job submission failed: Invalid job array specification
The same job works fine without the array specification:
$ sbatch -p development -t 0:30:0 -n 1 j1
This post is a bit old, but in case it happens to other people: I had the same issue, but the accepted answer did not describe what the problem was in my case.
This error (sbatch: error: Batch job submission failed: Invalid job array specification) can also be raised when the array size is too large.
From https://slurm.schedmd.com/slurm.conf.html
MaxArraySize
The maximum job array size. The maximum job array task index value will be one less than MaxArraySize to allow for an index value of zero. Configure MaxArraySize to 0 in order to disable job array use. The value may not exceed 4000001. The value of MaxJobCount should be much larger than MaxArraySize. The default value is 1001.
To check the value, the slurm.conf file should be accessible to all Slurm users (still according to the documentation above) and may be found somewhere near /etc/slurm.conf (see https://slurm.schedmd.com/slurm.conf.html#lbAM; in my case I found it at /etc/slurm/slurm.conf).
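If scontrol is available to you, a quicker way to check the configured limit (without hunting for slurm.conf) should be something like:
scontrol show config | grep -i MaxArraySize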
The syntax of your array specification is correct. But the printout you pasted is not standard Slurm; I guess you are working on Stampede, which has its own sbatch wrapper.
What you could do is use the -vvv option to sbatch to see exactly what Slurm sees:
$ sbatch -vvv -p development -t 0:30:0 -n 1 -a 1-2 j1 |& grep array
This should return
sbatch: array : 1-2
and if it does not, it means the information is being lost somewhere.
What you can try is to remove the array specification from the submission command line and insert it in the submission script, like this:
$ sbatch -p development -t 0:30:0 -n 1 j1
with j1 being
#!/bin/bash
#SBATCH -a 1-2
echo job id is $SLURM_JOB_ID
echo array job id is $SLURM_ARRAY_JOB_ID
echo task id is $SLURM_ARRAY_TASK_ID
The next step is to contact the system administrators with the information you will get from running the above tests and ask for help.
