I'm trying to use mocha's parallel flag (--parallel) with my tests. In case of a failure, I would like to find out which test file failed (for example: test fileB failed).
Folder Structure would be like below
Login
- fileA
- fileB
- fileC
- fileD
NODE_TLS_REJECT_UNAUTHORIZED=0 ./node_modules/mocha/bin/mocha $(find test/api/v4/Login -name '*.js') --timeout 60000 --parallel --jobs 3
I have 4 test files inside the 'Login' dir. If a test in fileB fails during execution, is it possible to output which test file failed?
I have created a pipeline which performs ansible-lint on $CI_PROJECT_DIR. The problem is that the complete output is not shown in the UI, unlike when I run it on my local machine.
You can see the difference between the two outputs below.
Below is the output from my local machine (Ubuntu with ansible-lint installed)
**ansible-lint create-dir.yaml -v**
WARNING Overriding detected file kind 'yaml' with 'playbook' for given positional argument: create-dir.yaml
INFO Executing syntax check on create-dir.yaml (0.31s)
WARNING Listing 1 violation(s) that are fatal
syntax-check: 'file' is not a valid attribute for a Play
create-dir.yaml:4:3
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! 'file' is not a valid attribute for a Play
The error appears to be in '/tmp/create-dir.yaml': line 4, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: Create a directory if it does not exist
^ here
Finished with 1 failure(s), 0 warning(s) on 1 files.
Below is output from Gitlab CI :
$ find ./ -not \( -name "*.ansible-lint" -o -name ".gitlab-ci.yml" \) \( -name "*yml" -o -name "*yaml" \) | xargs ansible-lint -v
WARNING Overriding detected file kind 'yaml' with 'playbook' for given positional argument: ./ansible/create-dir.yaml
INFO Executing syntax check on ansible/create-dir.yaml (0.39s)
Cleaning up project directory and file based variables 00:01
Job succeeded
I would like to know why there is a difference in the output, and how to print the complete message in GitLab CI.
Job logs in GitLab CI/CD have a limited length. This limit exists for operational reasons: jobs can produce arbitrary output, so one could, by mistake or on purpose, emit gigabytes or terabytes of text.
But if you scroll up in the output, you have the ability to view the full log by pressing 'Complete Raw'.
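If you want the log outside the browser (for archiving or grepping), the full trace can also be fetched through the GitLab jobs API. This is only a minimal sketch: the host, project ID, job ID and token below are placeholders you would need to fill in.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/<job_id>/trace"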
I am trying to add a validation step to a gitlab repo holding a single ansible role (with no playbook).
The structure of the role looks like:
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the gitlab-ci looking like:
stages:
- lint
job-lint:
image:
name: cytopia/ansible-lint:latest
entrypoint: ["/bin/sh", "-c"]
stage: lint
script:
- ansible-lint --version
- ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, thus ignoring rule 106.
Otherwise, I would like all files at the repo root to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood: I may have this point wrong. But anyway, if I give no name, lint does return OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea on how to check all the files from a subdirectory or through the whole role?
PS : I am using cytopia image for ansible-lint, but I have no problem using another, provided it's hosted on dockerhub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
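If the role ever gains YAML files outside tasks/, a recursive glob avoids listing each directory by hand. This is just a sketch and assumes the job runs under bash 4+ (globstar is a bash feature):
# enable recursive globbing, then lint every .yml/.yaml under the current directory
shopt -s globstar
ansible-lint -x 106 **/*.yml **/*.yaml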
If you want to check all the YAML files, you can use find combined with xargs, something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
However, ansible-lint -x 106 ./ should work. Are you sure that your role really has errors? I've tested it both on ansible-galaxy init generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role, replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to check whether it really works; maybe the rules applied to individual YAML files differ from those applied to whole roles.
When I ran my find-based check I got a lot of extra [204] Lines should be no longer than 160 chars warnings.
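If those line-length warnings are just noise for you, the same skip mechanism applies; as far as I know -x takes a comma-separated list of rule IDs, so something like this should silence both rules:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106,204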
I'm trying to use a Slurm-operated cluster to run LS-Dyna (a finite-element simulation program with a limited number of licenses available on my cluster). I am trying to write my batch scripts so that I do not waste processing time due to this license limit (as well as to improve legibility when running 'squeue' commands) by using job arrays, but I'm having trouble making that work.
I want to run identical Bash scripts on a variety of FEM meshes, each of which I have organized into a different subfolder.
Given this folder structure on my cluster...
cluster root
|
...
|
|-+ my scratch space's root
|
|-+ this project
|
|--+ lat_-5mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.75mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.5mm
| |- runCurrentLine.bash
| |- other files
|
...
|
|--+ lat_5mm
| |- runCurrentLine.bash
| |- other files
|
|
|-sendDynaRuns.bash
|-other dependencies
...I'm trying to submit "runCurrentLine.bash" in each folder by running the following script on my login node.
#!/bin/bash
iter=0
for foldernow in */; do
# change to subdirectory for current line iteration
cd "./${foldernow}";
# make Slurm and user happy
echo "sending LS Dyna simulation for ${pos}mm line..."
sleep 1
# first line only: send batch, and get job ID
if [ "${iter}" == 0 ];then
# send the batch...
jobID=$(sbatch -J "Dyna" --array="${iter}"%15 runCurrentLine.bash)
# ...ensure that Slurm's output shows on console (which includes the job ID)...
echo "${jobID}"
# ...and extract the job ID and save as a variable
jobID=$(echo "${jobID}" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?')
# subsequent lines: add current line to job array
else
scontrol update --jobid="${jobID}" --array="${iter}"%15 runCurrentLine.bash
fi
# prepare to move onto next position
iter=$((iter+1))
cd ../
done
This setup properly sends the batch job for the first line, at -0.25mm*. However, for the second line onwards, it doesn't seem to do the same thing... This is what I end up getting on my console:
*: I intended the "lat_xmm" folders to be numerically ordered, but Unix doesn't seem to recognize that
$ ./sendDynaRuns.bash
sending LS Dyna simulation for -0.25mm line...
Submitted batch job 1081040
sending LS Dyna simulation for 0.25mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
sending LS Dyna simulation for -0.5mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
I know that runCurrentLine.bash runs just fine if I manually send it as a batch (and it runs to completion within the time limit I specified in-file, mainly since it doesn't have to compete with other lines for open licenses). What should I do to be able to get my code to work?
Thank you in advance!
As stated by @Poshi, you cannot add jobs to an existing array.
I would create a submission script like this one:
#!/bin/bash
#SBATCH --array=0-<nb of folders - 1>%15
# ALL OTHER SLURM SBATCH DIRECTIVES HERE

# bash arrays are zero-indexed, hence the 0-based --array range above
folders=(lat_*)
foldernow=${folders[$SLURM_ARRAY_TASK_ID]}
cd "$foldernow" && ./runCurrentLine.bash
The only drawback is that you need to set the size of the array explicitly, based on the number of folders.
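If you prefer not to hard-code the array size, a small wrapper run on the login node can count the folders and pass the range on the command line, since sbatch options given on the command line override the #SBATCH directives in the script. The name runArray.bash below is just a placeholder for the submission script above:
#!/bin/bash
# count the lat_* folders and size the job array to match
n=$(ls -d lat_*/ | wc -l)
# %15 still caps the number of tasks running at once (license limit)
sbatch -J "Dyna" --array=0-$((n-1))%15 runArray.bash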
I have multiple log files in a directory /home/user/ with the pattern x.log, y.log, z.log:
The content of the files is:
error
pass
fail
executed
not executed
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
done
completed
I want the output from the multiple log files combined into a single new file, like this:
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Can you help me out with a shell script?
You can use awk:
awk '/Summary/ {run=1} run==1 {print} /Finished/ {run=0}' *.log > log.agr
This will take the contents of every file ending with .log, start writing to log.agr when it finds a line containing Summary, and stop writing after a line containing Finished. It repeats that through the entire contents of all the *.log files.
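If you also want to know which log each block came from, a small variation on the same one-liner prepends awk's built-in FILENAME variable to every printed line:
awk '/Summary/ {run=1} run==1 {print FILENAME": "$0} /Finished/ {run=0}' *.log > log.agr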
Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest and get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?
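If you don't want to run autotest, the same require-everything trick can be launched straight from the shell; this sketch assumes your test files live under tests/ and follow the test_*.rb naming, and it prints one combined summary (tests, assertions, failures, errors) for all of them:
ruby -I.:lib:tests -e 'require "test/unit"; Dir.glob("./tests/test_*.rb").each { |f| require f }'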