GenerateSwaggerCode task does not invalidate local cache for changes done in $ref file - gradle

I'm running the gradle-swagger-generator-plugin GenerateSwaggerCode task on an input YAML file that contains a $ref to another file. The Gradle build cache is enabled.
The task output is loaded FROM-CACHE even when changes are made to the referenced file.
I'm looking for a way to configure the plugin to invalidate the cache and rerun generation when the referenced files change.
task definition:
swaggerSources {
    myApi {
        inputFile = file('./api.yaml')
        code {
            language = 'spring'
            configFile = file('./swagger-config.json')
        }
    }
}
api.yaml:
swagger: '2.0'
info:
  title: My api
  version: 1.0.0
host: localhost
definitions:
  MyDef:
    $ref: './another.yaml#/definitions/MyDef'
swagger.generator version: 2.18.2
swagger-codegen version: 2.4.18
gradle version: 6.8.3

Specifying code.inputs.files in the plugin config seems to fix the problem.
https://github.com/int128/gradle-swagger-generator-plugin/issues/144
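A minimal sketch of that workaround, based on this question's file names (whether the `code` block exposes the underlying task's `inputs` this way is an assumption drawn from the linked issue): declaring the referenced file as an extra task input makes Gradle include it in the build-cache key, so edits to it no longer resolve FROM-CACHE.

```groovy
swaggerSources {
    myApi {
        inputFile = file('./api.yaml')
        code {
            language = 'spring'
            configFile = file('./swagger-config.json')
            // Register the $ref'd file as an additional task input so that
            // changes to it rerun code generation instead of hitting the cache.
            inputs.files(file('./another.yaml'))
        }
    }
}
```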

Dynamic UDF in Apache Drill cluster

I have a Drill cluster with 4 drillbits (Drill 1.14), but I cannot use the Dynamic UDF feature in the cluster for some reason. Every attempt runs into trouble.
Let me present 2 scenarios:
Scenario 1
Here is the config (configs are same for all drillbits):
drill.exec: {
  cluster-id: "drill-test",
  zk: {
    connect: "vm29.local:2181,vm32.local:2181,vm39.local:2181",
    root: "drill"
  },
  sys.store.provider.zk.blobroot: "hdfs://vm29.local:9000/apps/drill/pstore/",
  http: {
    enabled: true,
    ssl_enabled: false,
    port: 8047,
    session_max_idle_secs: 3600, # Default value 1hr
    cors: {
      enabled: true,
      allowedOrigins: ["*"],
      allowedMethods: ["GET", "POST", "HEAD", "OPTIONS"],
      allowedHeaders: ["X-Requested-With", "Content-Type", "Accept", "Origin"],
    }
  }
}
drill.exec.udf: {
  retry-attempts: 5,
  directory: {
    fs: "hdfs://vm29.local:9000/",
    root: "/drill",
    base: "/udf",
    local: ${drill.exec.udf.directory.base}"/local",
    staging: ${drill.exec.udf.directory.base}"/staging",
    registry: ${drill.exec.udf.directory.base}"/registry",
    tmp: ${drill.exec.udf.directory.base}"/tmp"
  }
}
As you can see, I use HDFS for UDFs in this scenario.
When I put jar files into the 'staging' folder and run 'CREATE FUNCTION USING JAR', the function registers successfully. BUT then I can use it only on the drillbit where I registered it.
For example, if I run the command in the web UI on vm29, I can use the function only on vm29.
If, in addition, I try to register the jar on a different drillbit, I get an 'already registered' error, but still cannot use the function (not found error).
The JAR files are present in hdfs://vm29.local:9000/drill/udf/registry and the metadata is in the ZK registry.
Scenario 2
The config is the same, with one difference: all drillbits use their local filesystem for the UDF folder.
In that case I can register/unregister a function, but I cannot use it on every drillbit (not found error). The jar files are present in the /UDF/registry folder and the metadata is in the ZK registry, but it does not work.
What am I doing wrong?
I cannot find any step-by-step instructions for using the Dynamic UDF feature in a cluster. Maybe you know of one?
Thanks.
Update:
I just thought: I use the web console for queries. Maybe it makes a difference whether the function is created through the web console or through a jdbc:zk connection? (I will test.)
Cause & Results
This is a bug in Drill 1.14. It was reported in the Drill Jira, and the fix with an explanation is in the Drill GitHub repository.
This is a regression since 1.13; we have opened a Jira ticket - https://issues.apache.org/jira/browse/DRILL-6762. Meanwhile, you can add custom UDFs manually - https://drill.apache.org/docs/manually-adding-custom-functions-to-drill/.
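For reference, the registration commands discussed above use Drill's Dynamic UDF SQL syntax (the jar name here is a placeholder):

```sql
-- Register UDFs from a jar previously copied into the staging directory
CREATE FUNCTION USING JAR 'my-udfs-1.0.jar';

-- Unregister them again
DROP FUNCTION USING JAR 'my-udfs-1.0.jar';
```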

task config not found

I am getting a
task config 'get-snapshot-jar/infra/hw.yml' not found
error. I have written a very simple pipeline .yml; it connects to an Artifactory resource and runs another yml which is defined in the task section.
my pipeline.yml looks like:
resources:
- name: get-snapshot-jar
  type: docker-image
  source: <artifactory source>
  repository: <artifactory repo>
  username: {{artifactory-username}}
  password: {{artifactory-password}}
jobs:
- name: create-artifact
  plan:
  - get: get-snapshot-jar
    trigger: true
  - task: copy-artifact-from-artifact-repo
    file: get-snapshot-jar/infra/hw.yml
Artifactory is working fine, but after that I am getting an error:
copy-artifact-from-artifact-repo
task config 'get-snapshot-jar/infra/hw.yml' not found
You need to specify an input for your copy-artifact-from-artifact-repo task which passes the get-snapshot-jar resource to the task's Docker container. Take a look at this post where someone runs into a similar problem: Trigger events in Concourse.
Also, your file variable looks odd. You are referencing a docker-image resource which, according to the official concourse-resource GitHub repo, has no yml files inside.
Generally I would keep my task definitions as close as possible to the pipeline code. If you have to reach out to different repos, you might lose the overview as your pipeline keeps growing.
cheers,
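To illustrate the first point: a task config referenced via file: must itself declare the resource as an input. A minimal sketch (the file contents are an assumption, since the question does not show hw.yml):

```yaml
# infra/hw.yml - hypothetical task config
platform: linux
image_resource:
  type: docker-image
  source: { repository: alpine }
inputs:
- name: get-snapshot-jar   # exposes the fetched resource inside the container
run:
  path: sh
  args: ["-ec", "ls get-snapshot-jar"]
```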

Google Cloud Spanner java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured

I am new to Google Cloud Spanner, and to explore it I started with the documentation provided by Google here. To explore any database we start with data operations, and that is what I did: I started with writing data to Spanner using the simple Java application given here: https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/spanner/cloud-client/src/main/java/com/example/spanner/SpannerSample.java. I have made changes in the driver class in the respective places shown in the following code snippet:
public static void main(String[] args) throws Exception {
    String path = "File_Path";
    SpannerOptions.Builder options = SpannerOptions.newBuilder()
        .setCredentials(GoogleCredentials.fromStream(new FileInputStream(path)));
    options.setProjectId("Project_id");
    Spanner spanner = (options.build()).getService();
    try {
        DatabaseId db = DatabaseId.of("project_id", "spannerInstance", "Database_name");
        DatabaseClient dbClient = spanner.getDatabaseClient(db);
        run(dbClient);
    } finally {
        spanner.closeAsync().get();
    }
    System.out.println("Closed client");
}
Now, when I try to execute the code, I end up with the following error:
Exception in thread "main" java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured.
    at io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(GrpcSslContexts.java:174)
    at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:151)
    at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:139)
    at io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:109)
    at com.google.cloud.spanner.SpannerOptions$NettyRpcChannelFactory.newSslContext(SpannerOptions.java:283)
    at com.google.cloud.spanner.SpannerOptions$NettyRpcChannelFactory.newChannel(SpannerOptions.java:274)
    at com.google.cloud.spanner.SpannerOptions.createChannel(SpannerOptions.java:253)
    at com.google.cloud.spanner.SpannerOptions.createChannels(SpannerOptions.java:240)
    at com.google.cloud.spanner.SpannerOptions.<init>(SpannerOptions.java:89)
    at com.google.cloud.spanner.SpannerOptions.<init>(SpannerOptions.java:43)
    at com.google.cloud.spanner.SpannerOptions$Builder.build(SpannerOptions.java:180)
While searching for this issue, I was advised to add some dependencies like:
compile group: 'org.eclipse.jetty.alpn', name: 'alpn-api', version: '1.1.3.v20160715'
compile group: 'org.mortbay.jetty.alpn', name: 'jetty-alpn-agent', version: '2.0.6'
compile group: 'io.grpc', name: 'grpc-all', version: '1.2.0'
compile group: 'io.netty', name: 'netty-all', version: '4.0.29.Final'
compile group: 'org.eclipse.jetty.orbit', name: 'javax.servlet', version: '3.0.0.v201112011016'
but I am facing the same issue. I am also using BigQuery and other GCP features in the same working environment, and they all work fine except Spanner. Any suggestion on this is appreciated. Thanks.
Please read the comments on the question; the discussion between @Mairbek Khadikov and me concludes the actual reason for the issue. As discussed in the comments, the actual problem was with other dependencies.
By adding
configurations {
    compile.exclude module: 'netty-all'
}
to the build.gradle file, this issue was resolved.
Here is the link to the GitHub issue I raised regarding this error. There I posted the exact issue as I eventually understood it, along with the resolution by @michaelbausor.

How to use YML files for a Mojito project?

It is said that Mojito can use JSON or YAML for application.json (the config file), but I haven't seen any YAML examples around.
For example, how to convert:
[
    {
        "settings": [ "master" ],
        "specs": {
            "hello": {
                "type": "HelloWorldMojit"
            }
        }
    }
]
to a YML file?
Also, when we use
$ mojito create app Hello
can't we specify that we want YML files as the default (instead of JSON files)?
Details:
I used npm's yamljs to convert the file to:
-
  settings: [master]
  specs: { hello: { type: HelloWorldMojit } }
and it doesn't work. I then edited it to:
-
  settings: [master]
  specs:
    hello:
      type: HelloWorldMojit
That won't work either. The server can start, but when the homepage is accessed, the error is:
error: (outputhandler.server): { [Error: Cannot expand instance [hello],
or instance.controller is undefined] code: 500 }
(the file routes.json depends on hello being defined)
As of Mojito 0.5.2, YAML is supported again; 0.5.1 and 0.5.0 do not support it.
We don't have archetypes with YAML, so you will have to transform the files manually and rename them. The good news is that a more flexible archetype infrastructure is in the making.
You should be OK with the configuration you pasted in the question; just use the latest version of Mojito (0.5.x).
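For reference, the JSON config from the question maps to this YAML one-to-one (the file name application.yaml follows the convention of swapping the extension, which is an assumption here):

```yaml
# application.yaml - equivalent of the application.json above
- settings: [master]
  specs:
    hello:
      type: HelloWorldMojit
```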

Scripted editing of Symfony 2 YAML file breaks the formatting and produces errors

I'm trying to script out our entire installation process for new Symfony 2.1 projects, including adding and configuring all our bundles and their dependencies. The end result should be one command that sets up everything, so the developer is both forced into our best-practices setup and does not have to spend time on this.
So far this has been fairly successful, since it is now possible to go from 0 to a fully installed CMS in about an hour (mostly due to composer installs). You can see the result here: https://github.com/Kunstmaan/KunstmaanSandbox/blob/feature/update-to-2.1/README.md
The next phase of this project is modifying the Symfony config YAML files. But here I'm stuck.
For parameters.yml I did this with a Ruby script. Here is the relevant extract; the full script can be found here: https://github.com/Kunstmaan/KunstmaanSandbox/blob/feature/update-to-2.1/app/Resources/docs/scripts/sandboxinstaller.rb
parametersymlpath = ARGV[1]
projectname = ARGV[2]
parametersyml = YAML.load_file(parametersymlpath)
params = parametersyml["parameters"]
params["searchport"] = 9200
params["searchindexname"] = projectname
params["sentry.dsn"] = "https://XXXXXXXX:XXXXXXXX@app.getsentry.com/XXXX"
params["cdnpath"] = ""
params["requiredlocales"] = "nl|fr|de|en"
params["defaultlocale"] = "nl"
params["websitetitle"] = projectname.capitalize
File.open(parametersymlpath, 'w') {|f| f.write(YAML.dump(parametersyml)) }
So far so good, but the same type of script fails on the config.yml due to these lines:
imports:
    - { resource: @KunstmaanMediaBundle/Resources/config/config.yml }
    - { resource: @KunstmaanAdminBundle/Resources/config/config.yml }
    - { resource: @KunstmaanFormBundle/Resources/config/config.yml }
    - { resource: @KunstmaanSearchBundle/Resources/config/config.yml }
    - { resource: @KunstmaanAdminListBundle/Resources/config/config.yml }
The @ is a reserved character according to the YAML spec, and Ruby throws an error.
So I switched to PHP and the Symfony YAML component, since at this point in the install there is a full Symfony installation, and I came up with this standalone command: https://gist.github.com/3526251
But when reading and dumping the config.yml file, the lines above, for example, would turn into:
imports:
    -
        resource: @KunstmaanMediaBundle/Resources/config/config.yml
    -
        resource: @KunstmaanAdminBundle/Resources/config/config.yml
    -
        resource: @KunstmaanFormBundle/Resources/config/config.yml
    -
        resource: @KunstmaanSearchBundle/Resources/config/config.yml
    -
        resource: @KunstmaanAdminListBundle/Resources/config/config.yml
which looks like crap, and I'm not entirely sure it will even work.
So at this point I'm looking at moving the fully modified config.yml files into the install script and just overwriting the originals. I would rather not go there, since it will take constant maintenance if something changes in the symfony-standard project.
I'm wondering if there is another way?
These two forms are semantically equivalent. They are called the inline (flow) and indented (block) styles, respectively.
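A small illustration of that equivalence (a.yml and b.yml are placeholders): both list items below parse to identical single-key mappings, only the notation differs:

```yaml
imports:
    # flow (inline) style
    - { resource: a.yml }
    # block (indented) style - same structure once parsed
    -
        resource: b.yml
```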
