I need to pull certain numbers from the console output of a Jenkins build, and then plot that data on a graph. If my output is:
+ echo -153
-153
+ echo master count: 13596
master count: 13596
Finished: SUCCESS
I want to pull the master count and the -153. The master count is the total number of errors, while -153 is the change in the number of errors between two builds. I then want to make a graph from those two numbers.
So my question is: how do I get those two sets of data from the console into a graph in Jenkins? The numbers will change over time, and I want to be able to see the trend in errors.
Assuming that the shell code you listed above is under your control, the easiest way to do this is to echo the values to a file instead of (or as well as) the console, and then use the Jenkins Plot Plugin to display the results.
This is exactly what the Plot Plugin is for.
You need to change your shell build step (or another part of the build) to create a separate file for each value that you want to plot.
They need to be of the form:
YVALUE=<value>
In your example you would need a file "mastercount.txt" with:
YVALUE=13596
and another file called "diffcount.txt" with:
YVALUE=-153
Then, under Post-build Actions, configure the Plot Plugin to pick up these files and generate the plots.
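For example, a shell build step along these lines could produce the two files (a sketch only; DIFF and COUNT stand in for however your build actually computes the two numbers):

# Shell build step (sketch): write one properties file per plotted value.
# DIFF and COUNT are placeholders for the build's real computations.
DIFF=-153
COUNT=13596
echo "YVALUE=${COUNT}" > mastercount.txt
echo "YVALUE=${DIFF}"  > diffcount.txt

In the plot configuration, add one data series per file and choose the properties file format; the YVALUE=<value> layout above is what that format expects.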
I'm trying to develop my own flow manager, and even though I'm not fully familiar with Ansible, it looks like it can do the job.
I'd like to evaluate part of the concept with you and understand whether it is doable in Ansible. So rather than asking for a solution, I'm asking for suggestions about the architecture.
Here are the requirements:
Flow executes on one machine.
The flow should be divided into an arbitrary number of steps (depending on project requirements) that can be executed sequentially or in parallel, e.g.:
- step_0
- step_1
- step_2 | step_3 (in parallel)
- step_4 | step_5 (in parallel)
Here step_0 should be executed first, and once it is done, step_1 should be launched. After step_1 finishes, steps 2 and 3 should start in parallel, and when both of them are done, steps 4 and 5 should run, again in parallel.
Every step should be a logical wrapper around an arbitrary number of commands. E.g., step_0 can execute a script that creates a directory skeleton, followed by commands for setting environment variables, followed by commands for linking. Then step_1 starts a new logical unit, etc.
For every step I would like to have common, generic callbacks before and after step execution. Callback requirements (again using step_0 as the example):
pre_exe callback:
- create flag files step_0.START and step_0.RUNNING
- create a log file step_0.log and redirect the output of step_0 to step_0.log
post_exe callback:
- delete step_0.RUNNING
- create flag file step_0.DONE
- grep step_0.log for a failing signature (one or more strings: fail, error, etc.)
- grep step_0.log for a passing signature (a few strings: pass, script_finished_successfully, etc.)
- based on the grep results, create flag file step_0.PASS (when no failing signature and at least one passing signature is found, i.e. !FSIG & PSIG) or step_0.FAIL (in any other case)
- if step_0.FAIL is created, terminate the flow execution
Generally it would be good to have PSIG and FSIG configurable at the step level, but I can imagine it with hard-coded strings for all steps.
I would be happy if somebody could confirm whether this is doable in Ansible, and if it is, suggest a high-level architecture so that I can focus my attention.
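To make the requirements concrete, here is a rough, untested sketch of what one wrapped step might look like as Ansible tasks (step_0.sh and the signature strings are placeholders):

# flow.yml (untested sketch; step_0.sh and the signatures are placeholders)
- hosts: localhost
  gather_facts: false
  tasks:
    # pre_exe callback: create the START and RUNNING flag files
    - name: step_0 pre_exe flags
      file:
        path: "{{ item }}"
        state: touch
      loop:
        - step_0.START
        - step_0.RUNNING

    # the step itself; stdout/stderr are redirected to the step log
    - name: run step_0
      shell: ./step_0.sh > step_0.log 2>&1
      ignore_errors: true

    # post_exe callback: swap RUNNING for DONE, then grade the log
    - name: delete RUNNING flag
      file:
        path: step_0.RUNNING
        state: absent
    - name: create DONE flag
      file:
        path: step_0.DONE
        state: touch
    - name: grade the log (PASS only when no FSIG and at least one PSIG)
      shell: |
        if ! grep -qiE 'fail|error' step_0.log \
           && grep -qiE 'pass|script_finished_successfully' step_0.log; then
          touch step_0.PASS
        else
          touch step_0.FAIL
        fi
    - name: check for FAIL flag
      stat:
        path: step_0.FAIL
      register: step_0_fail
    - name: terminate the flow on failure
      fail:
        msg: step_0 failed, aborting the flow
      when: step_0_fail.stat.exists

For the parallel parts (steps 2 and 3, then 4 and 5), I assume Ansible's async: with poll: 0 plus async_status could run two such wrappers concurrently on the same machine.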
I need a shell script to pull .dat files from a source server to an SFTP server.
Every time the job runs, the shell script has to check whether the table already exists on the SFTP server and fetch all files for that table with a date greater than that of the existing file (the comparison has to be based on the date in the filename).
Example: yesterday the job ran and the file "table1_extract_20190101.dat" was extracted. On the source server I have two files, "table1_extract_20190102.dat" and "table1_extract_20190103.dat". The job then has to fetch both files, and so on for each and every table.
Please suggest how this could be implemented.
Thanks
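For illustration, this is the kind of filename-date comparison I have in mind, assuming the script runs on the SFTP server and pulls from the source server (untested sketch; the host and directories are placeholders):

#!/bin/sh
# Untested sketch; user@sourcehost and the directories are placeholders.
TABLE=table1
REMOTE=user@sourcehost
REMOTE_DIR=/source/extracts
LOCAL_DIR=/sftp/incoming

# Latest date already present locally, parsed from the filename
# (a yyyymmdd date compares correctly as a plain number).
last=$(ls "$LOCAL_DIR"/${TABLE}_extract_*.dat 2>/dev/null \
        | sed -n 's/.*_\([0-9]\{8\}\)\.dat$/\1/p' | sort -n | tail -1)
last=${last:-0}

# List the remote files for this table and fetch those with a newer date
# (one sftp session per file keeps the sketch simple, not efficient).
echo "ls -1 $REMOTE_DIR/${TABLE}_extract_*.dat" | sftp -b - "$REMOTE" \
| while read -r f; do
    d=$(echo "$f" | sed -n 's/.*_\([0-9]\{8\}\)\.dat$/\1/p')
    [ -n "$d" ] && [ "$d" -gt "$last" ] \
      && echo "get $f $LOCAL_DIR/" | sftp -b - "$REMOTE"
  done

This would then be wrapped in a loop over all tables.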
Use the Ab Initio SFTP To component.
Ideally, add it at the end of the graph that creates the files, so all handling is in one place. The SFTP To component(s) would run in a new phase after the files are written.
Alternatively, create another Ab Initio graph that looks for filenames based on the filename specification used to generate the original files. One risk is being sure the files have been written completely, which is why it is ideal to do this in the original graph. You would need to schedule this second graph to run after the first graph completes; a good way to do that is with a plan. Another way, using Control>Center, is to schedule this job after the previous one completes by adding a job dependency.
When I debug using the JMeter GUI with listener(s), I sometimes want to remove certain results, such as the last 5 or the failed ones, but keep all of the other output.
Can I select which output to clear? I can't Ctrl-click to mark individual results, and I don't want to clear all of the output.
Well, there is no option to remove specific results from a listener. What you can do is write the result set to a CSV file from the listener and then manipulate the result set according to your needs.
This is a simple solution that you can try for your problem.
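For example, assuming the default JMeter CSV column layout, where the 8th field is the success flag, something like this would drop the failed samples and the last five remaining rows (the column position and the GNU-style head -n -5 are assumptions):

# Sketch: results.csv written by a Simple Data Writer with default columns.
# Keep the header, drop failed samples (success field = "false"),
# then drop the last 5 remaining rows (head -n -5 needs GNU coreutils).
head -n 1 results.csv > filtered.csv
tail -n +2 results.csv | awk -F',' '$8 != "false"' | head -n -5 >> filtered.csv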
I'm new to Snakemake (started trying it out in the last week or so), hoping to hand off more of the small details of workflows; previously I have coded up my own specific workflows in Python.
I put together a small workflow which, among other steps, takes Illumina PE reads and runs Kraken against them. I then parse the Kraken output to detect the most common species (within a set of allowable ones) if a species value wasn't provided (running with snakemake -s test.snake --config R1_reads= R2_reads= species='').
I have 2 questions.
What is the recommended approach given the dynamic output/input?
Currently my strategy for this is to create a temp file which contains the detected species and then cat {input.species} into the other shell commands. This doesn't seem elegant, but looking through the docs I couldn't quite find an adequate alternative. I noticed PersistentDicts would let me pass variables between run: commands, but I'm unsure whether I can use that to load variables into a shell: section. I also noticed that wrappers could allow me to handle it, but from the point where I need that variable onward I'd be wrapping the remainder of my workflow.
Is Snakemake the right tool if I want to use the species afterwards to run a set of scripts specific to that species (with multiple species-specific workflows)?
Right now my impression is that the way to solve this is to have multiple workflow files, one per species, and a rule with a switch that calls the associated species workflow depending on the species.
Appreciate any insight on these questions.
-Kim
You can mark output as dynamic (e.g. expecting one file per species). Then, Snakemake will determine the downstream DAG of jobs after those files have been generated. See http://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#dynamic-files
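A minimal sketch of that pattern (untested; the rule names, paths, and helper scripts are made up for illustration, and note that newer Snakemake versions replace dynamic() with checkpoints):

# Untested Snakefile sketch; script names and paths are placeholders.
rule detect_species:
    input:
        "kraken/report.txt"
    output:
        dynamic("species/{species}.txt")   # one file per detected species
    shell:
        "parse_kraken.py {input} --outdir species/"

rule per_species:
    input:
        "species/{species}.txt"
    output:
        "results/{species}.done"
    shell:
        "run_species_pipeline.sh {wildcards.species} {input} && touch {output}"

The per-species rule is written once; Snakemake instantiates it for each file the dynamic output produces, which also covers the second question about dispatching species-specific follow-up work.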
I have a build server, implemented with TeamCity.
Until now, I have had an input parameter that represents the first 3 parts of the version number (x.y.z) = %Version.Number%
When I compile my exe files, I set the file version to %Version.Number%.%build_number%, which gives me a 4-part version number.
The problem with that solution is that there is no connection between the first 3 parts of the version number and the build number.
Now, I want to find a way to have a separate %build_number% sequence for each %Version.Number%.
I will illustrate the problem with an example:
On the first build when %Version.Number% = 15.3.2 - the version number will be 15.3.2.0 .
On the second build when %Version.Number% = 15.3.2 - the version number will be 15.3.2.1 .
Now, on a new build when %Version.Number% = 16.0.0 - the version number will be 16.0.0.2, but I want it to be 16.0.0.0.
Thanks.
There are at least two ways to handle this:
- Use the Version Number Plugin: it allows you to reset the "running" build number whenever you like; simply set the next build number to '1' whenever the major release is increased (as a bonus, it also lets you format the version number with leading zeroes and such).
- Create a new job whenever you increase the major release number: copy build_job_15.3.2 to build_job_16.0.0, edit the version number in build_job_16.0.0 to be '16.0.0', and optionally disable build_job_15.3.2. Now you can run build_job_16.0.0, and the build number will start at '1' (this method is a bit tedious, but it allows you to continue building 15.3.2 releases if needed).
You are able to reset the build number counter in the General Settings of the Build Configuration. Is that not sufficient for your scenario?
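For reference, if the counter reset covers your scenario, the relevant TeamCity General Settings would look roughly like this (a sketch; %build.counter% is TeamCity's stock counter parameter):

Build number format: %Version.Number%.%build.counter%
Build counter:       reset to 0 manually whenever Version.Number is bumped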