Global variable that counts the number of errors - Ansible

I'm new to Ansible. I'm using playbooks/roles on OpenStack and I'm looking for a global variable, or something similar, that counts the number of errors in the playbook, or at least records that an error occurred at some point.
This is why (let's say this variable is called GLOB):
In the tasks file:
---
# tasks file
- name: testing block
  block:
    # List of tests to run in this test suite:
    - include: ../tests/DoThings.yml  # API calls, HTTP 50x errors possible: YES
  always:  # Final tasks, always executed
    - include: ../tests/Clear.yml     # API calls, HTTP 50x errors possible: YES
    - include: ../tests/Report.yml    # No API calls, HTTP 50x errors possible: NO, just a log checker
So if an error occurs in DoThings.yml, I want to clean everything up with Clear.yml and AFTER that execute Report.yml; inside Report.yml I will check whether GLOB recorded at least one failure. This matters because a failure can be:
a "50x HTTP" error, which is impossible to predict and means you have to try again (this is the important case for me, the one I need to tell apart from other errors), or
a normal error from a code mistake or similar, which is easy to fix (not important in this case).
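For what it's worth, one way to record this (a sketch, not from the original question; the err_count variable is an invented name) is a rescue section on the block, using set_fact to bump a counter before the always section runs:

---
# tasks file (sketch: err_count is an illustrative variable name)
- name: testing block
  block:
    - include: ../tests/DoThings.yml
  rescue:
    # Runs only if something in the block failed; remember it for Report.yml.
    - set_fact:
        err_count: "{{ (err_count | default(0) | int) + 1 }}"
  always:
    - include: ../tests/Clear.yml
    - include: ../tests/Report.yml

Inside Report.yml a task can then branch on (err_count | default(0) | int) > 0 in a when: condition to tell "at least one failure" apart from a clean run.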

Related

Why is the ansible sros_command module responding with an error from a role, but working independently?

I am attempting to send a command to an SROS device using an ansible role. The task itself is:
- name: invoke the sros cli
  sros_command:
    commands: ["{{ item.input }}"]
  register: sros_command_result
This command is being run inside a loop of several commands. I know that the module will allow you to send multiple commands at once, but I need to do additional processing on each command, so it's simpler to handle them individually. I've verified that item.input is correct, and the notation is sending the command as a list, which is what the module wants for input.
In the case I am testing, the command itself is show chassis.
I have verified that I am connected to the device, and an independent debug run of the module only generates the correct response from the device.
When I run this via my role, though, it responds with: "Unable to decode JSON from response to exec_command('{\"command\": \"show chassis\", \"prompt\": null, \"answer\": null}'). Received 'None'."
I'm very lost. I do not know why this error is appearing (other than the device not sending a response), nor can I figure out any way to debug this.
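One way to narrow this down (a sketch, not from the original post: the command_list variable is an invented stand-in for whatever drives the loop) is to dump the raw registered result for every item, so you can see exactly what, if anything, came back from the device before the JSON decode step:

- name: invoke the sros cli
  sros_command:
    commands: ["{{ item.input }}"]
  register: sros_command_result
  loop: "{{ command_list }}"  # hypothetical list of {input: ...} entries
  ignore_errors: true         # keep going so every command's result is captured

- name: show the raw module output per command
  debug:
    var: sros_command_result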

What is the easiest way to make an Azure pipeline stage fail for debugging purposes?

I would like to make a given stage fail to test my conditions
- stage: EnvironmentDeploy
  condition: and(succeeded()...)
Is it possible to make a stage fail purposefully?
The easiest way I can think of is adding a job with a PowerShell script, and using the throw keyword to exit the script with an error:
stages:
- stage: StageToFail
  jobs:
  - job: JobToFail
    steps:
    - pwsh: throw "Throwing error for debugging purposes"
I assume you don't want this failure to happen consistently, as that would make deployment impossible. Assuming it's a one-off thing, why not push code with a compile error so the build fails, or, if your code base has them and your pipeline runs them as a prerequisite for deployment, add a unit test that fails. A plain script step that exits non-zero also works; see the sketch below.
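For completeness, a minimal sketch of the same trick without PowerShell, since any script step that exits with a non-zero code fails its job and stage (stage and job names here are illustrative):

stages:
- stage: StageToFail
  jobs:
  - job: JobToFail
    steps:
    - script: exit 1  # a non-zero exit code marks the step, job, and stage as failed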

How do I mark the Concourse CI build as failed if the tests failed?

I am running some automated tests using a Concourse CI pipeline. My requirement is to mark the build as failed if any of the tests fail, and to email the results. Is there a way to do this in Concourse? The email feature is working fine, but the build passes even with the test failures.
Under the assumption that the exit code is correct, you will need to use the on_failure step in Concourse and add it to your job. It will look something like this:
jobs:
- name: myBuild
  plan:
  - get: your-repo
    passed: []
    trigger: false
  - task: run-tests
    file: runMyTestsTask.yml
    on_failure:
      put: send-an-email
      params:
        subject_text: "Your email subject, e.g. Failed Build"
        body_text: "Your message when the build has failed"
    on_success:
      put: push-my-build

## Define your additional resources here
resources:
- name: send-an-email
  type: email
  source:
    smtp:
      host: smtp.example.com
      port: "587" # this must be a string
      username: a-user
      password: my-password
    from: build-system@example.com
    to: [ "dev-team@example.com", "product@example.net" ] # optional if `params.additional_recipient` is specified

resource_types:
- name: email
  type: docker-image
  source:
    repository: pcfseceng/email-resource
Additionally, if you need to include some relevant information about the build, you can do so via environment variables that wrap the Concourse metadata, and include those in the body of the email message. For more details on how to do this, please refer to the documentation of the email resource here: https://github.com/pivotal-cf/email-resource.
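For example (a sketch based on that documentation; exact interpolation support depends on the resource version), the build metadata variables can be referenced directly in the text params:

    on_failure:
      put: send-an-email
      params:
        subject_text: "Build failed: ${BUILD_PIPELINE_NAME}/${BUILD_JOB_NAME}/${BUILD_NAME}"
        body_text: "See ${ATC_EXTERNAL_URL}/builds/${BUILD_ID} for details."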
If you're running JMeter in command-line non-GUI mode, your command needs to return a non-zero exit status code, because even if there are failures in your test, the JMeter process exits with code 0, and therefore any CI system will treat the execution as successful.
If you're looking for a JMeter-only solution, you can add a JSR223 Assertion to your Test Plan and put the following code into the "Script" area:
if (!prev.isSuccessful()) {
    props.put('failure', 'true')
}
Then add a tearDown Thread Group to your Test Plan and put a JSR223 Sampler there with the following code:
if ('true'.equals(props.get('failure'))) { // null-safe: the property is unset when nothing failed
    System.exit(1)
}
If any Sampler in the JSR223 Assertion's scope fails, the whole JMeter process will finish with exit code 1, which is treated as an error by any upstream processing system.
Another option is using the Taurus tool as a wrapper for your JMeter test. Taurus provides a flexible pass/fail criteria subsystem which lets you define thresholds for considering the test successful or not. If the thresholds are exceeded, Taurus returns a non-zero exit code, which will be understood by Concourse (or whatever other software); a sketch follows.
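A minimal sketch of such a Taurus configuration (the .jmx file name is illustrative); the passfail module is what turns a threshold violation into a non-zero exit code:

execution:
- scenario:
    script: my-test-plan.jmx  # hypothetical JMeter test plan

reporting:
- module: passfail
  criteria:
  - fail>0%, continue as failed           # any failed sampler marks the run as failed
  - avg-rt>500ms for 30s, stop as failed  # sustained slow responses abort and fail the run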

Intermittently breaking tests in my hand-rolled Sinatra app. Related to file processing?

Summary: After adding logic to save user account data, my code seems to work fine and sometimes all my (many) tests pass. But sometimes they fail seemingly randomly, with /tmp test files not being deleted during testing.
In my hand-rolled Ruby/Sinatra "to do list" program, I added user accounts and can now save data to user files (.yml format) as well as tmp files for people who aren't logged in. Yay!
As far as I can tell, the code works fine. All tests pass...but only sometimes. Sometimes, the tests related to my new file processing methods fail. Here's a sample:
# Running:
....EF..........................
Finished in 3.930466s, 8.1415 runs/s, 53.1744 assertions/s.
1) Error:
ToDoTest#test_post_newtask:
Errno::EACCES: Permission denied @ unlink_internal - tmp/1.yml
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `delete'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `block (2 levels) in teardown'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `each'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `block in teardown'
2) Failure:
ToDoTest#test_get_deleted [C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:167]:
Expected false to be truthy.
32 runs, 209 assertions, 1 failures, 1 errors, 0 skips
rake aborted!
Command failed with status (1): [ruby -I"lib" -I"C:/Ruby23/lib/ruby/gems/2.3.0/gems/rake-10.4.2/lib" "C:/Ruby23/lib/ruby/gems/2.3.0/gems/rake-10.4.2/lib/rake/rake_test_loader.rb" "test/test_task.rb" "test/test_task_store.rb" "test/test_todo.rb" "test/test_todo_helpers.rb" "test/test_users.rb" ]
Tasks: TOP => default => test
(See full trace by running task with --trace)
This is only a sample, because sometimes many more tests fail or have errors. It's weirdly random. I noticed that my tests, which result in a lot of /tmp files being made and deleted very rapidly, sometimes failed to delete some files, and as a result some would be left behind. If I reran my tests when there were undeleted files in /tmp, there would be even more (again, random) errors.
One common error I saw, which I never saw before adding the new file processing commands, is this one: Errno::EACCES: Permission denied @ unlink_internal. I looked this up on SO, but there seems to be only (irrelevant-seeming) Rails stuff. This is a Sinatra program running on Windows. So could I replicate the tests in my Ubuntu VM? Yes I could: precisely the same sort of error pattern.
Anyway, I suspected that system commands were not finishing before execution continued. But apparently not. I tried putting "sleep 2" after all my system commands, and I still got a random failing test and cruft left in /tmp. I also tried using threads, which I have never used before, like this:
delr = Thread.new do
  File.delete(@store.path) # seems to help to add this here...
end
delr.join
But that didn't help.
One other thing...I'm teaching myself and this is probably not the way it's supposed to be done, but...all of my get methods are preceded by a check of my session[:id] variable to see if the user is logged in, and to see if the correct datafile is loaded. I don't know if that's relevant but it might be.
Any ideas on what the problem could be or how to fix it?
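For what it's worth, one pattern to check (a sketch, not from the original post; the file path, payload, and helper name are all illustrative): on Windows, Errno::EACCES on unlink usually means something, often your own code, still holds the file open. Using the block form of File.open everywhere so handles close promptly, and retrying deletion briefly in teardown, tends to either cure or expose this kind of flakiness:

require 'yaml'
require 'fileutils'

FileUtils.mkdir_p('tmp')

# Block form closes the handle as soon as the block exits:
data = { 'tasks' => [] }  # illustrative payload
File.open('tmp/1.yml', 'w') { |f| f.write(data.to_yaml) }

# In teardown, retry briefly instead of failing on the first EACCES:
def delete_with_retry(path, attempts = 5)
  attempts.times do
    begin
      File.delete(path) if File.exist?(path)
      return
    rescue Errno::EACCES
      sleep 0.1  # another handle may still be open; give it a moment
    end
  end
end

delete_with_retry('tmp/1.yml')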

Chef - finding the missing attribute NilClass

Context - We have a massive number of Chef attributes to perform our install; something like 3000+ have now been defined, and they change per environment.
Problem - Sometimes a Chef recipe will reference a non-existent attribute node[:mystuff][:typo]. This results in the following error:
Recipe Compile Error in /var/chef/cache/cookbooks/<yyy>/recipes/something.rb
undefined method '[]' for nil:NilClass
This is a worthless error because it doesn't tell me which node attribute is missing. Even running with chef-client -l debug doesn't help. knife cookbook test <x> doesn't help because syntactically the recipe is correct. Is there a way to get it to print out the exact line number that is causing the error? The recipe may contain tens or hundreds of attributes, so it is a huge time waster going through line by line to discover a typo.
I wrote Chef Sugar's deep_fetch method precisely for this reason.
The error you are getting is just a by-product of how Ruby hashes behave. For more information on deep_fetch, you can also see my blog post on the subject: https://sethvargo.com/delicious-new-chef-sugars/
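A minimal sketch of the difference, assuming chef-sugar is available in the recipe (attribute names taken from the question; deep_fetch! is the raising variant):

# Plain access raises "undefined method '[]' for nil:NilClass"
# with no hint about which key is the typo:
node[:mystuff][:typo][:deeper]

# deep_fetch returns nil instead of blowing up part-way down the chain:
value = node.deep_fetch(:mystuff, :typo)  # => nil

# deep_fetch! raises an error that names the missing attribute path:
node.deep_fetch!(:mystuff, :typo)
# => Chef::Sugar::AttributeDoesNotExistError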
