I am checking the validity of a YAML file using https://onlineyamltools.com/convert-yaml-to-json.
The YAML below is correct:
# Valid yaml (field "name" placed at LAST position)
match:
- uri:
    prefix: "/mysvc1/"
route:
- destination:
    host: myservice1
    port:
      number: 80
name: "svc1-routes"
However, if I move the field name to the first position, the YAML becomes invalid. What is the reason?
# Invalid yaml (field "name" placed at FIRST position)
match:
name: "svc1-routes"    # <---- ERROR ----
- uri:
    prefix: "/mysvc1/"
route:
- destination:
    host: myservice1
    port:
      number: 80
The error message:
Error: YAMLException: end of the stream or a document separator is expected at line 4, column 1:
    - uri:
    ^
Contrary to your comment, name and match are on the same level because they share the same indentation. name is in no way nested in match (nor is route).
The list items, however, are nested in match, since YAML understands the - as part of the indentation; hence the list items are considered more indented than match and are thus nested in it.
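You can see this with any compliant parser; for instance, a quick check with PyYAML:

import yaml

# The "-" counts as indentation, so the sequence nests under "match"
# even though it starts in the same column as the key:
print(yaml.safe_load("match:\n- uri: x\n"))
# -> {'match': [{'uri': 'x'}]}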
Concerning your error:
name: "svc1-routes"
- uri:
In this part, the mapping key name is assigned the scalar value svc1-routes. Each mapping key may have only one value. On the next line, a sequence starts on a deeper indentation level (as explained above), but YAML can't put it anywhere because the key name already has a value. This is why it issues an error.
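You can reproduce this in isolation. Here is a minimal sketch with PyYAML (a different parser than the js-yaml behind the online converter, so the error wording differs, but the document is rejected for the same reason):

import yaml

broken = """\
name: "svc1-routes"
- uri:
    prefix: "/mysvc1/"
"""
try:
    yaml.safe_load(broken)
except yaml.YAMLError as exc:
    # PyYAML complains that it expected the block mapping to end,
    # because the key "name" already has its one value.
    print(exc)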
You can freely reorder the mapping keys together with their nested values, e.g.:
route:
- destination:
    host: myservice1
    port:
      number: 80
name: "svc1-routes"
match:
- uri:
    prefix: "/mysvc1/"
This will load to the same structure, as per the YAML spec.
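If you want to verify that, here is a minimal check with PyYAML (the parser does not matter here, since the spec mandates this):

import yaml

last = """\
match:
- uri:
    prefix: "/mysvc1/"
route:
- destination:
    host: myservice1
    port:
      number: 80
name: "svc1-routes"
"""

first = """\
name: "svc1-routes"
match:
- uri:
    prefix: "/mysvc1/"
route:
- destination:
    host: myservice1
    port:
      number: 80
"""

# Key order does not affect the loaded mapping:
assert yaml.safe_load(last) == yaml.safe_load(first)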
Related
I tried to separate my resources field into multiple files in my project. Most of them worked fine, but this file, where I declare a CognitoUserPool + CognitoUserPoolClient, threw this exception:
Unhandled rejection YAMLException: duplicated mapping key in "/home/uriel/Desktop/Foo/Backend/MultipleFile/backend-foo/resources/cognitoUserPoolFoo.yml" at line 20, column 3:
      Type: AWS::Cognito::UserPoolClient
      ^
    at generateError (/home/uriel/.nvm/versions/node/v10.16.0/lib/node_modules/serverless/node_modules/js-yaml/lib/js-yaml/loader.js:167:10)
I've already checked for logical and indentation problems. These same lines worked in a single file; they only throw this error when I move them to another YML file and import it.
Main YML file importing the other one:
plugins:
  - serverless-webpack
  - serverless-python-requirements
  - serverless-offline

resources:
  - ${file(resources/cognitoUserPoolFoo.yml)}
The imported file, the one that throws the error:
Resources:
  CognitoUserPoolFoo:
    Type: AWS::Cognito::UserPool
    Properties:
      MfaConfiguration: OFF
      UserPoolName: foo-${self:provider.stage}
      EmailConfiguration:
        ReplyToEmailAddress: foo@test.com
        SourceArn: "arn:aws:ses:us-east-1:123456789012:identity/foo@test.com"
      AutoVerifiedAttributes:
        - email
      Policies:
        PasswordPolicy:
          MinimumLength: 6
          RequireLowercase: True
          RequireNumbers: True
          RequireSymbols: False
          RequireUppercase: True
  FooCognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: FooWebApp-${self:provider.stage}
      GenerateSecret: false
      UserPoolId:
        Ref: "CognitoUserPoolFoo"
I am trying to deploy a Lambda function along with the serverless.yml file to AWS, but it throws the error below.
The following is the function defined in the YAML file:
functions:
  s3-thumbnail-generator:
    handler:handler.s3_thumbnail_generator
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3.ObjectCreated:*
          rules:
        - suffix: .png

plugins:
  - serverless-python-requirements
The error I am getting:
can not read a block mapping entry; a multiline key may not be an implicit key in "serverless.yml" at line 45, column 10:
How do I fix this issue in the YAML file in order to deploy the function to AWS?
The problem is that there is no value indicator (:) at the end of the line
handler:handler.s3_thumbnail_generator
(the colon in the middle, not being followed by a space, is just part of a plain scalar). The parser therefore continues trying to gather a multi-line plain scalar, pulling in events up to the value indicator that follows it. But a multi-line plain scalar cannot be an implicit key in YAML, hence the error.
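You can see the rule in isolation with PyYAML (a different parser than the js-yaml serverless uses, but the plain-scalar rule is the same): a colon without a following space is not a value indicator, so a lone such line is just one string:

import yaml

# No space after the colon: the whole line is one plain scalar ...
print(yaml.safe_load("handler:handler.s3_thumbnail_generator"))
# -> 'handler:handler.s3_thumbnail_generator'

# ... with the space, it parses as a key/value pair:
print(yaml.safe_load("handler: handler.s3_thumbnail_generator"))
# -> {'handler': 'handler.s3_thumbnail_generator'}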
It is unclear what you actually intended. It might be that you need to add the value indicator at the end of the line and keep a colon embedded in your key:
functions:
  s3-thumbnail-generator:
    handler:handler.s3_thumbnail_generator:
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3.ObjectCreated:*
          rules:
            - suffix: .png

plugins:
  - serverless-python-requirements
Or it could be that the embedded colon should have been a value indicator (which usually needs a following space) and the indentation was sloppy:
functions:
  s3-thumbnail-generator:
    handler: handler.s3_thumbnail_generator
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3.ObjectCreated:*
          rules:
            - suffix: .png

plugins:
  - serverless-python-requirements
If this is your original file, there is also a syntax error in your YAML; I added a note under the line with the likely error:
functions:
  s3-thumbnail-generator:
    handler:handler.s3_thumbnail_generator
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3.ObjectCreated:*
          rules:
        - suffix: .png
        ^^^ this line should be indented one level

plugins:
  - serverless-python-requirements
I am looking for a way to dynamically set the key using the path of the matched file.
For example, if I have this YAML:
prospectors.config:
  - fields:
      queue_name: <somehow get the globbed string below in here>
    paths:
      - /var/log/casino/*.log
    type: log

output.redis:
  hosts:
    - "producer:6379"
  key: "%{[fields.queue_name]}"
And then, if I had a file called /var/log/casino/test.log, key would become test.
I'm not sure that what you want is possible.
You could use the source field and configure your Redis output using that as the key:
output.redis:
  hosts:
    - "producer:6379"
  key: "%{source}"
This would have the disadvantage of being the absolute path of the source file, not the basename as your question asks for.
If you have a small number of possible basename patterns and want a queue for each, there is a workaround. For example, if you have the files:
/common/path/test-1.log
/common/path/foo-0.log
/common/path/01-bar.log
/common/path/test-3.log
...
and wanted to have three queues in Redis, test, foo, and bar, you could use the source field and the conditionals available in the keys configuration of the Redis output, something like this:
output.redis:
  hosts:
    - "producer:6379"
  key: "default_key"
  keys:
    - key: "test_key"
      when.contains:
        source: "test"
    - key: "foo_key"
      when.contains:
        source: "foo"
    - key: "bar_key"
      when.contains:
        source: "bar"
When I review the cryptogen (a Fabric command) config file, I see some symbols I don't understand:
Profiles:
  SampleInsecureSolo:
    Orderer:
      <<: *OrdererDefaults        ## what is the `<<`
    Organizations:
      - *ExampleCom               ## what is the `*`
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1ExampleCom
          - *Org2ExampleCom
Above there are two symbols, << and *.
Application: &ApplicationDefaults # what is the `&` mean
Organizations:
As you can see, there is another symbol, &.
I don't know what these mean. I didn't find any information even after reviewing the source code (fabric/common/configtx/tool/configtxgen/main.go).
Well, those are elements of the YAML file format, which is used here to provide the configuration file for configtxgen. The & sign marks an anchor, * is an alias referencing that anchor, and << is the merge key, which merges the referenced mapping into the current one. These are basically used to avoid duplication, for example:
person: &person
name: "John Doe"
employee: &employee
<< : *person
salary : 5000
This will reuse the fields of person and has a similar meaning to:
employee: &employee
name : "John Doe"
salary : 5000
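You can confirm the merge with an off-the-shelf parser. A quick sketch with PyYAML, which, like most YAML 1.1 parsers, resolves the << merge key:

import yaml

doc = """\
person: &person
  name: "John Doe"

employee:
  <<: *person
  salary: 5000
"""
data = yaml.safe_load(doc)
# The fields of person are merged into employee:
print(data["employee"])
# -> {'name': 'John Doe', 'salary': 5000}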
Another example is simply reusing a value:
key1: &key some very common value
key2: *key
equivalent to:
key1: some very common value
key2: some very common value
Since fabric/common/configtx/tool/configtxgen/main.go uses an off-the-shelf YAML parser, you won't find any reference to these symbols in the configtxgen-related code. I would suggest reading a bit more about the YAML file format.
In YAML, if the data is like:
user: &userId '123'
username: *userId
the equivalent YAML is:
user: '123'
username: '123'
or the equivalent JSON is:
{
  "user": "123",
  "username": "123"
}
So it basically allows you to reuse data. You can also try it with an array instead of a single value like 123; try converting the YAML below to JSON using any online YAML-to-JSON converter:
users: &users
k1: v1
k2: v2
usernames: *users
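With a parser you can also see that the alias is not a copy but a reference to the very same node; for example with PyYAML:

import yaml

data = yaml.safe_load("""\
users: &users
  k1: v1
  k2: v2
usernames: *users
""")
print(data["usernames"])                   # -> {'k1': 'v1', 'k2': 'v2'}
print(data["users"] is data["usernames"])  # -> True: both keys share one object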
Spring Boot config *.yml file:
server.port: 2222
spring:
  application:
    name: x-service
  data:
    mongodb:
      host: db.x
      database: x
      # userName: ${db.userName}
      # password: ${db.password}
  rabbitmq:
    # port: ${queue.port}
    host: queue.x
    username: ${queue.userName}
    password: ${queue.password}
    listener:
      max-concurrency: 1
      prefetch: 1
      acknowledge-mode: auto
      auto-startup: true
      dynamic: true
###########DEV##############
spring.profiles: dev
#queue.virtual.host: xuser
queue.userName: guest
queue.password: guest
queue.port: 5672
#db.userName:
#db.password:
falconUrl: http://x.y.com
##########DEFAULT###########
spring.profiles: qa
queue.virtual.host: xuser
queue.userName: xuser
queue.password: xpassword
queue.port: 3456
db.userName: xuser
db.password: xpassword
falconUrl: http://x.z.com
It gives me an
org.yaml.snakeyaml.parser.ParserException: while parsing MappingNode
 in 'reader', line 1, column 1:
    server.port: 2222
    ^
Duplicate key: spring.profiles
 in 'reader', line 47, column 1:
error. If I comment out the properties of one of the profiles, it works fine.
Can anyone please suggest what is wrong here?
The error message is actually quite specific and accurate: in the top-level mapping of your YAML file (the one starting with the key-value pair of server.port and 2222), you have two identical keys (the scalar spring.profiles). Duplicate keys are not allowed in YAML, as keys are required to be unique according to the specification.
The underlying problem is that if you want to change the configuration depending on the environment, you'll have to follow the documented specification, which states that:
A YAML file is actually a sequence of documents separated by --- lines, and each document is parsed separately to a flattened map.
If a YAML document contains a spring.profiles key, then the profiles value (comma-separated list of profiles) is fed into the Spring Environment.acceptsProfiles() and if any of those profiles is active that document is included in the final merge (otherwise not)
Your YAML file is a single implicit YAML document because it lacks the directives end marker --- that occurs at the beginning of an explicit YAML document. (The document end marker ... might not be properly supported by SnakeYAML; at least it is not mentioned in the examples.)
Your code should look like:
server.port: 2222
spring:
  application:
    name: x-service
  data:
    mongodb:
      host: db.x
      database: x
      # userName: ${db.userName}
      # password: ${db.password}
  rabbitmq:
    # port: ${queue.port}
    host: queue.x
    username: ${queue.userName}
    password: ${queue.password}
    listener:
      max-concurrency: 1
      prefetch: 1
      acknowledge-mode: auto
      auto-startup: true
      dynamic: true
###########DEV##############
---
spring.profiles: dev
#queue.virtual.host: xuser
queue.userName: guest
queue.password: guest
queue.port: 5672
#db.userName:
#db.password:
falconUrl: http://x.y.com
##########DEFAULT###########
---
spring.profiles: qa
queue.virtual.host: xuser
queue.userName: xuser
queue.password: xpassword
queue.port: 3456
db.userName: xuser
db.password: xpassword
falconUrl: http://x.z.com
The statement in the documentation that "each document is parsed separately to a flattened map" is of course only true if each of the documents has a mapping at the top level. That is what Spring Boot expects, but you could just as easily have a scalar or a sequence at the top level of a document, and such documents are certainly not parsed by SnakeYAML into a flattened map.
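To see what the parser hands to Spring Boot after the fix, you can load the stream document by document. A sketch using PyYAML as a stand-in for SnakeYAML (both split the stream on ---), with a shortened version of the file above:

import yaml

stream = """\
server.port: 2222
---
spring.profiles: dev
queue.port: 5672
---
spring.profiles: qa
queue.port: 3456
"""

# One top-level mapping per document; Spring Boot then keeps only the
# documents whose spring.profiles entry matches an active profile.
for doc in yaml.safe_load_all(stream):
    print(doc)
# -> {'server.port': 2222}
# -> {'spring.profiles': 'dev', 'queue.port': 5672}
# -> {'spring.profiles': 'qa', 'queue.port': 3456}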