Behat's command line options support defining multiple formatters: http://docs.behat.org/guides/6.cli.html#format-options.
I want to define multiple formatters in a YAML configuration file instead, but I suck at YAML and don't seem able to get the syntax correct.
So far I have:
default:
  - formatter:
      name: junit
      parameters:
        output_path: xml
  - formatter:
      name: pretty
      parameters: ~
  extensions:
    Behat\MinkExtension\Extension:
      base_url: 'http://myurl.com'
      javascript_session: sahi
      browser_name: chrome
      goutte: ~
      sahi: ~
Which gives the error:
You cannot define a mapping item when in a sequence
I've also tried defining the elements as a list within a single formatter, but that complains that the formatter cannot contain numbered indexes.
In Behat 3.x use:
build:
  formatters:
    progress:
    junit: [./build/logs/behat]
    html: [./build/behat/index.html]
In Behat 2.x, use a comma to separate the formatter names just like in the command line:
default:
  formatter:
    name: progress,junit,html
    parameters:
      output_path: ,./build/logs/behat,./build/behat/index.html
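Putting that together with the extensions block from the question, a complete Behat 2.x profile would presumably look something like this (a sketch only; the junit output path is carried over from the question):
default:
  formatter:
    name: pretty,junit
    parameters:
      output_path: ,xml
  extensions:
    Behat\MinkExtension\Extension:
      base_url: 'http://myurl.com'
      javascript_session: sahi
      browser_name: chrome
      goutte: ~
      sahi: ~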
I have a Spring Boot project in which I have developed an API with OpenAPI in YML format and auto-generated the classes with openapi-generator-maven-plugin. The YML is as follows:
openapi: 3.0.2
info:
  version: 0.0.1-SNAPSHOT
  title: Example API
servers:
  - description: Localhost
    url: 'http://localhost:{port}/my-first-api'
    variables:
      port:
        default: '8080'
tags:
  - name: Example
paths:
  /api/v1/examples:
    get:
      summary: Get examples
      operationId: getExamples
      description: Obtain a list of available examples.
      tags:
        - Example
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Example'
components:
  schemas:
    Example:
      title: Example
      type: object
      properties:
        description:
          type: string
        check:
          type: boolean
      example:
        description: 'Example'
        check: true
As you can see, I have defined that the local base path is:
http://localhost:8080/my-first-api
And the only available endpoint adds this path:
/api/v1/examples
Therefore, I expected that once the artifact was started locally, I could consume the endpoint from this URL:
http://localhost:8080/my-first-api/api/v1/examples
But to my surprise it doesn't work: this URL is not found. It does, however, find the following:
http://localhost:8080/api/v1/examples
As you can see, it is reachable without the "my-first-api" part of the path, but I need that part of the path to be there too... What could be happening?
Thanks!
In my tests, it worked just fine. The my-path part of the generated @RequestMapping changed to match the spec:
@RequestMapping("${project.name.base-path:/my-path}")
But as you can see, Spring would allow you to override this base URL using the project.name.base-path property. (The actual property name is probably different for you.)
So, my suggestion would be:
Check if the annotation on the generated Controller changes at all.
If it does, check if the property is overridden at some point.
Check if you are setting Spring's own base URL with the property server.servlet.context-path (see the sketch below).
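If the goal is simply to serve everything under /my-first-api, a minimal application.yml sketch (assuming Spring Boot 2.x property names; the filename and value are illustrative) would be:
# application.yml: prefixes every endpoint with /my-first-api at runtime
server:
  servlet:
    context-path: /my-first-api
Keep in mind that the servers URL in the OpenAPI document itself is descriptive metadata and does not, by itself, change where Spring serves the endpoints.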
I am trying to write a CircleCI config that will allow me to reuse both whole list/mapping(?) entries and their properties.
Having the following:
image_definitions:
  docker:
    - &default_localstack_image
      image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0
defaults_env: &defaults_env
  environment:
    PG_PORT: 5432
    PG_USER: root
I would like to be able to replace:
test: &test
  docker:
    - image: localstack/localstack:0.10.3
  <<: *defaults_env
with something like:
test: &test
  docker:
    - *default_localstack_image
  <<: *defaults_env
but it doesn't work this way.
I've also tried:
test: &test
  docker:
    - *default_localstack_image
  *defaults_env
but that also didn't work.
How can I do that?
According to the documentation:
test: &test
  docker:
    - <<: [*default_localstack_image, *defaults_env]
However, be aware that the merge feature is not part of the YAML spec and was only defined for the outdated YAML 1.1. I don't know whether the parser CircleCI uses actually implements it. Even if it does, be aware that the merge key is the odd one out: in violation of the spec, which says every tag is to be mapped to a type, it is instead interpreted as a transformation instruction, even though the loading process defined by the spec has no place for executing transformation steps.
Similar functionality (for example, concatenating scalars) is requested on SO fairly often but is not available (and probably never will be). If you need to do something like this, my advice is to do what e.g. Ansible and SaltStack do and use a templating engine as a preprocessor for your YAML file.
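For reference, the result the question is aiming for (whether produced by the merge key or by a preprocessor) should be equivalent to this fully expanded form, based on my reading of the anchors above:
test: &test
  docker:
    - image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0
  environment:
    PG_PORT: 5432
    PG_USER: root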
I don't know how to set the full path of the environment JSON file.
I guess the path is composed of multiple levels, but it's not described in the docs.
The path is currently environment/dev.json; what is the correct path/name? My tests on the hash in my cookbooks fail.
Sample JSON:
{ "foo": { "bar": "base" } }
I need to test this from my cookbooks:
if node[:foo][:bar] == "base"
  puts "ok"
end
For this snippet, can anyone explain the syntax used to query a hash? Is it Ruby or Chef-specific syntax? Why not node['foo']['bar']?
My .kitchen.yml file:
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  environments_path: 'environment'
  client_rb:
    environment: dev
platforms:
  - name: ubuntu-1204
suites:
  - name: default
    run_list:
      - recipe[mysql::default]
    attributes:
To answer your question about the Ruby/Chef syntax, that's Ruby syntax.
Ruby hashes (aka hashtables or dictionaries in other languages) can have any object as a key, but in the Ruby world it's most common to use symbols.
JSON objects can only have strings as their keys, so when you convert JSON into a Ruby hash it really depends on the developer as to whether they choose to leave the keys as strings or convert them to symbols to be more idiomatic.
You can see there's a flag in JSON.parse called symbolize_names which will automatically convert them for you.
That isn't an environment file; it needs to look like the docs say, and the default path is test/integration/environments/.
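A minimal sketch of the provisioner block, assuming the dev.json environment file is moved under that default location:
provisioner:
  name: chef_zero
  # test/integration/environments is the default location, so this line is optional
  environments_path: test/integration/environments
  client_rb:
    environment: dev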
I'm trying to build certain software on each machine locally. The playbook would download the source tarball (using get_url), configure and build it.
I'd like to define the list of items to build as something like the below:
srcpkg:
  python:
    ver: "3.7.0"
    sha: "0382996d1ee6aafe59763426cf0139ffebe36984474d0ec4126dd1c40a8b3549"
    url: "https://www.python.org/ftp/python/{{ srcpkg.python.ver }}/python/Python-{{ srcpkg.python.pyver }}.tar.xz"
Unfortunately, such self-references (the url refers to ver in the above example) cause Ansible to throw a "recursive loop detected" error at runtime.
Is there a way -- either in Ansible or, maybe, simply in Yaml -- to define things so that I wouldn't have to repeat the version in more than one place?
Update: I tried to use an anchor/alias:
srcpkg:
  python:
    ver: &ver "3.7.0"
    sha: "0382996d1ee6aafe59763426cf0139ffebe36984474d0ec4126dd1c40a8b3549"
    url: "https://www.python.org/ftp/python/{{ *ver }}/python/Python-{{ *ver }}.tar.xz"
to no avail: Ansible complains of "unexpected '*'".
When you write the following in YAML:
url: "https://www.python.org/ftp/python/{{ *ver }}/python/Python-{{ *ver }}.tar.xz"
the right side of the : specifies a scalar value. YAML aliases are not resolved in parts of a scalar.
Ansible thus creates a string variable with the value: https://www.python.org/ftp/python/{{ *ver }}/python/Python-{{ *ver }}.tar.xz.
And for Jinja2 *ver is a syntax error.
What you can do is to use a helper Ansible variable (YAML uses eager evaluation for aliases, Jinja2 uses lazy evaluation for variables):
srcpkg:
  python:
    ver: &ver "3.7.0"
    sha: "0382996d1ee6aafe59763426cf0139ffebe36984474d0ec4126dd1c40a8b3549"
    url: "https://www.python.org/ftp/python/{{ python_version }}/python/Python-{{ python_version }}.tar.xz"
python_version: *ver
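A sketch of how these variables might then be consumed in a get_url task (the task name and destination path are illustrative, not from the question):
- name: Download the Python source tarball
  get_url:
    url: "{{ srcpkg.python.url }}"
    dest: "/tmp/Python-{{ srcpkg.python.ver }}.tar.xz"
    checksum: "sha256:{{ srcpkg.python.sha }}"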
I have a bunch of concourse pipeline files that look like the following:
---
resources:
  - name: example
    type: git
    source:
      uri: git#github.internal.me.com:me/example.git
      branch: {{tracking_branch}}
      private_key: {{ssh_key}}
      paths:
        - code/src/do/teams/sampleapp
    params:
      depth: 1
  - name: deploy-image
    type: docker-image
    source:
      repository: {{docker_image_url}}
And I want to parse them in Ruby to perform a bunch of transformations (like validating them and updating some keys if they are missing).
Problem is, whenever I try to load and then dump them back to files, the pieces that have {{something}} become:
branch:
  ? tracking_branch:
  :
private_key:
  ? ssh_key:
  :
Why is it doing this and is there any way I can configure the parser not to do this? Just leave these variables as they are?
To avoid conflict with YAML's internal syntax you need to quote your values:
---
resources:
  - name: example
    type: git
    source:
      uri: git#github.internal.me.com:me/example.git
      branch: '{{tracking_branch}}'
      private_key: '{{ssh_key}}'
      paths:
        - code/src/do/teams/sampleapp
    params:
      depth: 1
This sort of thing comes up in Ansible configuration files all the time for similar reasons.
The { and } characters are used in YAML for flow mappings (i.e. hashes). If you don't provide a value for a mapping entry you get nil.
So in the case of branch: {{tracking_branch}}, since there are two pairs of braces, you get a hash with a key branch and value (in Ruby) of
{{"tracking_branch"=>nil}=>nil}
When this is dumped back out to YAML you get the somewhat awkward and verbose:
branch:
  ? tracking_branch:
  :
The solution is simply to quote the value:
branch: "{{tracking_branch}}"
Completely forgot that Concourse now offers ((var-name)) for templating. I just switched to that instead of {{var-name}} in the pipelines and the YAML parser is now happy!
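For example, the same resource source written with the paren-style placeholders, which parse as plain YAML scalars and therefore need no quoting:
source:
  uri: git#github.internal.me.com:me/example.git
  branch: ((tracking_branch))
  private_key: ((ssh_key))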