I'm trying to create a Spring Boot application using Gradle, but I have an issue with the processResources task. I have some JAR files in 'src/main/resources/libs' that are used in my Java build path.
I have already tried adding a filter on application.properties only, but it doesn't work (Gradle processResources - file contains $ character).
I get this error on the 'processResources' task:
Could not copy file 'xxx\src\main\resources\libs\myJar1.jar' to 'xxx\build\resources\main\libs\myJar1.jar'.
...
Root cause: groovy.lang.GroovyRuntimeException: Failed to parse template script (your template may contain an error or be trying to use expressions not currently supported): startup failed:
SimpleTemplateScript11.groovy: 1: unexpected char: '\' # line 1, column 232.
6wPíÔà¬ÑüçZ Ç�8X›y«Ý«:|8“!\dÖñ%BW$ J
^
processResources {
    filesNotMatching("**/libs/*") {
        expand( // my application properties that should be filtered
            'project': ['version': project.version],
            'build': ['timestamp': new Date().format('dd/MM/yyyy HH:mm')]
        )
    }
}
Similar to the following answer: https://stackoverflow.com/a/36731250/2611959
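One way to keep the template engine away from the binary JARs entirely (a sketch, not from the original post) is to invert the match and run expand only on the properties file, so everything else is copied untouched:

```groovy
processResources {
    // Expand placeholders only in application.properties;
    // binary files such as the JARs under libs/ are copied as-is.
    filesMatching("**/application.properties") {
        expand(
            'project': ['version': project.version],
            'build': ['timestamp': new Date().format('dd/MM/yyyy HH:mm')]
        )
    }
}
```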
I am new to Apache Daffodil and am trying to follow the example provided: https://daffodil.apache.org/examples/
I am trying to parse the file simpleCSV.csv using the schema csv.dfdl.xsd. Both files are in the same folder as daffodil.bat.
In cmd, I run .\daffodil.bat parse --schema csv.dfdl.xsd simpleCSV.csv
I get the error:
[error] Schema Definition Error: Error loading schema due to org.xml.sax.SAXParseException; DaffodilXMLLoader: Unable to resolve schemaLocation='csv-base-format.dfdl.xsd'.
Schema context: file:/C:/Users/rinat/OneDrive/Desktop/WORK%20STUFF/apache-daffodil-3.4.0-bin/apache-daffodil-3.4.0-bin/bin/csv.dfdl.xsd Location in file:/C:/Users/rinat/OneDrive/Desktop/WORK STUFF/apache-daffodil-3.4.0-bin/apache-daffodil-3.4.0-bin/bin/csv.dfdl.xsd`
How do I resolve this?
You need to copy csv-base-format.dfdl.xsd (found in the same src/ directory as csv.dfdl.xsd) into the same directory as your csv.dfdl.xsd file. That file provides a number of default settings imported by csv.dfdl.xsd, so they must be in the same directory.
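A minimal sketch of the fix on the command line, with illustrative paths (the source location of csv-base-format.dfdl.xsd depends on where you obtained the example schemas):

```bat
:: copy the base format schema next to csv.dfdl.xsd (source path is illustrative)
copy path\to\src\csv-base-format.dfdl.xsd .
.\daffodil.bat parse --schema csv.dfdl.xsd simpleCSV.csv
```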
I am unable to use @Grab in a Jenkins pipeline and need help here. Following is the error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 1: unable to resolve class org.yaml.snakeyaml.Yaml
@ line 1, column 1.
@Grab('org.yaml:snakeyaml:1.17')
^
1 error
Following is the pipeline code
test.groovy
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

node {
    stage('test') {
        Yaml parser = new Yaml()
        def a = """
---
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:"""
        print(parser.load(a))
    }
}
The error occurs when the pipeline is defined as "Pipeline script from SCM"; it works fine with the "Pipeline script" definition and in the Script console.
Following code works with Script Console (Manage Jenkins -> Script console)
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

Yaml parser = new Yaml()
def a = """
---
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:"""
print(parser.load(a))
output
[environment:production, classes:[nfs::server:[exports:[/srv/share1, /srv/share3]]], parameters:null]
Groovy's @Grab uses Ivy to manage the retrieval of JARs. You need to add the Shared Groovy Libraries Plugin. By default, Grab fetches JARs from Maven Central, but you can specify other repositories with the @GrabResolver annotation. Taken from here
You can also add the JAR file at ./.groovy/grapes/org.yaml/snakeyaml/jars/snakeyaml-1.17.jar in your Jenkins home directory.
Alternatively, avoid the library entirely and use the standard readYaml/writeYaml steps from the Pipeline Utility Steps plugin.
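For reference, a sketch of the readYaml route, which needs no @Grab at all (assumes the Pipeline Utility Steps plugin is installed):

```groovy
node {
    stage('test') {
        // readYaml comes from the Pipeline Utility Steps plugin
        def parsed = readYaml text: '''
---
environment: production
classes:
  nfs::server:
    exports:
      - /srv/share1
      - /srv/share3
parameters:'''
        echo "${parsed.environment}" // production
    }
}
```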
I am running Spring Cloud Data Flow locally and want to override the path of the task logs.
java -jar spring-cloud-dataflow-server-2.0.0.M1.jar ^
--spring.datasource.url=jdbc:mysql://localhost:3306/dataflow ^
--spring.datasource.username=root ^
--spring.datasource.password=password ^
--spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver ^
--spring.cloud.deployer.local.working-directories-root=C:/logs
I've tried to include the property in dataflow-server.yml of the spring-cloud-starter-dataflow-server-local project as:
spring:
  cloud:
    deployer:
      local:
        working-directories-root: c:/logs/spring-cloud-dataflow
When I launch the task, I obtain:
Logs will be in C:\Users\Usuario1\AppData\Local\Temp\task-app...
Since you're on Windows, you may have to try the path as: c:\\logs\\spring-cloud-dataflow.
Here's the piece of logic that takes the supplied directory path to where the logs will be transferred.
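Applying that suggestion to the YAML from the question would look like this (a sketch; the quotes guard the escaped backslashes):

```yaml
spring:
  cloud:
    deployer:
      local:
        working-directories-root: "c:\\logs\\spring-cloud-dataflow"
```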
I am trying to run the kafka-connect-elasticsearch plugin from Confluent in order to stream topics from Kafka (V0.11.0.1) directly into Elasticsearch (without putting Logstash in between).
I built the connector using Maven -
$ cd kafka-connect-elasticsearch
$ mvn clean package
I then created the required configuration file -
name=es-cluster-lab
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=filebeats-test
topic.index.map=filebeats-test:kafka_test_index
key.ignore=true
schema.ignore=true
connection.url=http://elastic:9200
type.name=log
As per the new Kafka Classpath Isolation spec, I also added the following line to my connect-standalone.properties file -
plugin.path=/home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/
I go to run the script ...
bin/connect-standalone.sh config/connect-standalone.properties config/elasticsearch-connect.properties
... and receive the below error.
[2017-09-14 16:08:26,599] INFO Loading plugin from: /home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/slf4j-api-1.7.25.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.collect.Sets$SetView.iterator()Lcom/google/common/collect/UnmodifiableIterator;
at org.reflections.Reflections.expandSuperTypes(Reflections.java:380)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:221)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:198)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:190)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:150)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:68)
I also tried to move the JAR files into the /app/kafka/libs directory (default CLASSPATH) and even tried to create a subdirectory /app/kafka/libs/connect_libs and add that manually to my CLASSPATH environment variable.
I'm not sure what my next step is, besides putting Logstash between Kafka and Elasticsearch.
Try changing the Guava version to 20 before you build it.
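A sketch of what that pin might look like in the connector's pom.xml before running mvn clean package (this assumes Guava is declared as a direct dependency there; the exact location may differ):

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>20.0</version>
</dependency>
```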
I think you are missing the star '*' at the end of the plugin path.
plugin.path=/home/kafka/kafka-connect-elasticsearch-3.3.0/target/kafka-connect-elasticsearch-3.3.0-development/share/java/kafka-connect-elasticsearch/*
I'm trying to use a yaml file but whenever I try to read from it I get:
Exception in thread "main" mapping values are not allowed here in "<reader>", line 7, column 19
The yaml file:
topology:
  - name: teststormsignals
    jar: storm-signals-0.2.1-SNAPSHOT.jar
    topologyclass: backtype.storm.contrib.signals.test.SignalTopology
    packaging: mvn package
    repository: https://github.com/ptgoetz/storm-signals.git
I ran it through a YAML parser and it says the file is valid.
Try this (double quoted):
repository: "https://github.com/ptgoetz/storm-signals.git"
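In context, only the repository value needs quoting; some parsers treat the colon in the unquoted https:// as the start of a nested mapping value, which produces exactly this error:

```yaml
topology:
  - name: teststormsignals
    jar: storm-signals-0.2.1-SNAPSHOT.jar
    topologyclass: backtype.storm.contrib.signals.test.SignalTopology
    packaging: mvn package
    repository: "https://github.com/ptgoetz/storm-signals.git"
```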