I'm trying to use a YAML file, but whenever I try to read from it I get:
Exception in thread "main" mapping values are not allowed here in "<reader>", line 7, column 19
The YAML file:
topology:
    - name: teststormsignals
      jar: storm-signals-0.2.1-SNAPSHOT.jar
      topologyclass: backtype.storm.contrib.signals.test.SignalTopology
      packaging: mvn package
      repository: https://github.com/ptgoetz/storm-signals.git
I ran it through a YAML validator, but it says the file is valid.
Try this (double quoted):
repository: "https://github.com/ptgoetz/storm-signals.git"
I am new to Apache Daffodil and am trying to follow the example provided: https://daffodil.apache.org/examples/
I am trying to parse the file simpleCSV using the schema csv.dfdl.xsd. Both files are in the same folder as daffodil.bat.
In cmd, I run .\daffodil.bat parse --schema csv.dfdl.xsd simpleCSV.csv
I get the error:
[error] Schema Definition Error: Error loading schema due to org.xml.sax.SAXParseException; DaffodilXMLLoader: Unable to resolve schemaLocation='csv-base-format.dfdl.xsd'.
Schema context: file:/C:/Users/rinat/OneDrive/Desktop/WORK%20STUFF/apache-daffodil-3.4.0-bin/apache-daffodil-3.4.0-bin/bin/csv.dfdl.xsd Location in file:/C:/Users/rinat/OneDrive/Desktop/WORK STUFF/apache-daffodil-3.4.0-bin/apache-daffodil-3.4.0-bin/bin/csv.dfdl.xsd
How do I resolve this?
You need to copy csv-base-format.dfdl.xsd (found in the same src/ directory as csv.dfdl.xsd) into the same directory as your csv.dfdl.xsd file. That file provides a number of default settings imported by csv.dfdl.xsd, so they must be in the same directory.
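For example, from a Windows command prompt (a sketch; the source path is illustrative, so adjust it to wherever the example schemas live on your machine):

copy "C:\path\to\examples\src\csv-base-format.dfdl.xsd" .
.\daffodil.bat parse --schema csv.dfdl.xsd simpleCSV.csv

Once the two .dfdl.xsd files sit next to each other, the schemaLocation='csv-base-format.dfdl.xsd' import resolves relative to csv.dfdl.xsd and the parse should proceed.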
I'm using Helm 3 and microk8s. When I try a dry run:
microk8s.helm install <...> --dry-run --debug
I see errors like
Error: YAML parse error on ./templates/deployment.yaml: error converting YAML to JSON: yaml: mapping values are not allowed in this context
helm.go:76: [debug] error converting YAML to JSON: yaml: mapping values are not allowed in this context
YAML parse error on ./templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:129
helm.sh/helm/v3/pkg/releaseutil.SortManifests
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:98
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
/home/circleci/helm.sh/helm/pkg/action/install.go:455
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:214
main.runInstall
...
I found several questions with a similar error, but the answers usually just ask the poster to share the chart code. I have a large chart and need to debug this error on my own, and guessing which line it is complaining about doesn't seem productive.
Is there a way to know what exactly is wrong in the config?
Try: helm template ... --debug > foo.yaml
This'll output the rendered chart to foo.yaml (and the helm error stacktrace to stderr). Then find the template filename in question from the helm error and look through the rendered chart for a line like # Source: the-template-name.yaml. YAML to JSON conversion is done separately for each YAML object, so you may have multiple instances of the same # Source: the-template-name.yaml.
Look n lines below each # Source: ... comment for an error, where n is the line number of the error reported by Helm render.
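A sketch of what the rendered foo.yaml looks like (chart and resource names are illustrative):

---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mychart-web
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mychart-web

So if Helm reports a parse error on line 4 of deployment.yaml, count four lines down from the matching # Source: mychart/templates/deployment.yaml comment and inspect the rendered text there.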
I am unable to use @Grab in a Jenkins pipeline. Need help here. Following is the error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 1: unable to resolve class org.yaml.snakeyaml.Yaml
@ line 1, column 1.
@Grab('org.yaml:snakeyaml:1.17')
^
1 error
Following is the pipeline code
test.groovy
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

node {
    stage('test') {
        Yaml parser = new Yaml()
        def a = """
---
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:"""
        parser.load(a)
        print(parser.load(a))
    }
}
The error occurs in a pipeline with definition "Pipeline script from SCM", but works fine with definition "Pipeline script" and in the Script Console.
The following code works in the Script Console (Manage Jenkins -> Script Console):
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

Yaml parser = new Yaml()
def a = """
---
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:"""
parser.load(a)
print(parser.load(a))
Output:
[environment:production, classes:[nfs::server:[exports:[/srv/share1, /srv/share3]]], parameters:null]
Groovy's @Grab uses Ivy to manage the retrieval of jars. You need to add the Shared Groovy Libraries Plugin. By default, it fetches jars from Maven Central, but you can specify other repositories with the @GrabResolver annotation. Taken from here.
Also, you can add the jar file at ./.groovy/grapes/org.yaml/snakeyaml/jars/snakeyaml-1.17.jar in your Jenkins home directory.
Alternatively, the second case doesn't need this library at all: use the standard readYaml/writeYaml steps from the Pipeline Utility Steps plugin.
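A minimal sketch of the readYaml route (assumes the Pipeline Utility Steps plugin is installed; no @Grab needed):

node {
    stage('test') {
        // readYaml is provided by the Pipeline Utility Steps plugin,
        // so nothing has to be grabbed onto the classpath
        def parsed = readYaml text: '''
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:
'''
        echo "${parsed.environment}"                      // production
        echo "${parsed.classes['nfs::server'].exports}"   // [/srv/share1, /srv/share3]
    }
}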
Can anyone help me figure out the following error while deploying a React app on AWS Elastic Beanstalk?
2019-08-01 04:37:21 ERROR The configuration file .ebextensions/nodecommand.config in application version app-5466-190801_100700 contains invalid YAML or JSON. YAML exception: Invalid Yaml: mapping values are not allowed here
 in "<reader>", line 3, column 16:
option_settings:
               ^
, JSON exception: Invalid JSON: Unexpected character (/) at position 0.. Update the configuration file.
2019-08-01 04:37:21 ERROR Failed to deploy application.
Following is my nodecommand.config file -
option_settings:
  aws: elasticbeanstalk:container:nodejs:
    NodeCommand: "node server.compiled.js"
Update -
I followed this link to deploy a React app on AWS Elastic Beanstalk and got stuck on the above error:
https://medium.com/@wlto/how-to-deploy-an-express-application-with-react-front-end-on-aws-elastic-beanstalk-880ff7245008
This is what's shown in the linked tutorial:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "node server.compiled.js"
This is the YAML in your question:
option_settings:
  aws: elasticbeanstalk:container:nodejs:
    NodeCommand: "node server.compiled.js"
Can you spot the difference?
Spoiler: You've put a space after aws:. This causes the YAML parser to assume aws: is a mapping key with the value "elasticbeanstalk:container:nodejs:". However, the next line, which also starts with a mapping key (NodeCommand), is indented more, which would only be allowed if the previous line was a mapping key without a value.
If you remove the space, it correctly parses aws:elasticbeanstalk:container:nodejs as a mapping key and the following line as its value.
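Side by side (a sketch; only the space differs):

# With the space: "aws" is a key whose value is the plain string
# "elasticbeanstalk:container:nodejs:", so that entry is already
# complete and the more-indented line below it is a syntax error.
aws: elasticbeanstalk:container:nodejs:
  NodeCommand: "node server.compiled.js"

# Without the space: the whole token is a single key, and the
# indented line below is parsed as its nested mapping value.
aws:elasticbeanstalk:container:nodejs:
  NodeCommand: "node server.compiled.js"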
This didn't actually fix the problem for me. I found this document [1], which says that because my environment uses Amazon Linux 2, .ebextensions is not recommended. (But some of my ebextensions still work; I have no idea why.) Instead, a Buildfile, a Procfile, and platform hooks are recommended. Therefore, I created a Procfile with the following content to make the Node server run with the command node index.js.
Procfile
web: node index.js
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
I'm trying to create a Spring Boot application using Gradle.
But I have an issue with the processResources task.
In my case, I have some JAR files in 'src/main/resources/libs'; these files are used in my Java build path.
I have already tried to apply the filter to application.properties only, but it doesn't work (Gradle processResources - file contains $ character).
I get this error in the 'processResources' task:
Could not copy file 'xxx\src\main\resources\libs\myJar1.jar' to 'xxx\build\resources\main\libs\myJar1.jar'.
...
Root cause: groovy.lang.GroovyRuntimeException: Failed to parse template script (your template may contain an error or be trying to use expressions not currently supported): startup failed:
SimpleTemplateScript11.groovy: 1: unexpected char: '\' @ line 1, column 232.
6wPíÔà¬ÑüçZ Ç�8X›y«Ý«:|8“!\dÖñ%BW$ J
^
Exclude the binary jars from template expansion so that only the text resources are filtered:

processResources {
    filesNotMatching("**/libs/*") {
        expand( // my application properties that should be filtered
            'project': ['version': project.version],
            'build': ['timestamp': new Date().format('dd/MM/yyyy HH:mm')]
        )
    }
}
Similar to the following answer: https://stackoverflow.com/a/36731250/2611959
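The linked answer takes the inverse approach: run the template engine only on the files that actually contain placeholders and copy everything else verbatim. A sketch under that assumption (the matched filename is illustrative):

processResources {
    // expand placeholders only in the properties file;
    // jars and other binaries are copied byte-for-byte
    filesMatching("**/application.properties") {
        expand(
            'project': ['version': project.version],
            'build': ['timestamp': new Date().format('dd/MM/yyyy HH:mm')]
        )
    }
}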