Nomad artifact download issue

Operating system: Windows 10
Nomad version: 0.11.3
Nomad Java SDK: 0.11.3.0
Nomad runs in dev mode
I am trying to download a git repo using a Nomad job, but I get the error below after the repo has been loaded into the job's allocation folder.
Error:
2 errors occurred:
* failed to parse config:
* Root value must be object: The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.
Job file (if appropriate)
{
  "id": "get_git_job",
  "name": "get_git_job",
  "datacenters": [
    "dc1"
  ],
  "taskGroups": [
    {
      "name": "get_git_group",
      "tasks": [
        {
          "name": "get_git_task",
          "driver": "raw_exec",
          "resources": {
            "cpu": 500,
            "memoryMb": 2000
          },
          "artifacts": [
            {
              "getterSource": "github.com/hashicorp/nomad",
              "relativeDest": "local/repo"
            }
          ],
          "leader": false,
          "shutdownDelay": 0
        }
      ]
    }
  ],
  "dispatched": false
}
Nomad Client logs (if appropriate)
[INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad.exe: opening fifo: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task #module=logmon path=//./pipe/get_git_task-48748a1a.stdout timestamp=2020-12-02T16:32:33.755+0530
[DEBUG] client.alloc_runner.task_runner.task_hook.artifacts: downloading artifact: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task artifact=github.com/hashicorp/nomad
[INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad.exe: opening fifo: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task #module=logmon path=//./pipe/get_git_task-48748a1a.stderr timestamp=2020-12-02T16:32:33.761+0530
[DEBUG] client: updated allocations: index=518 total=25 pulled=22 filtered=3
[DEBUG] client: allocation updates: added=0 removed=0 updated=22 ignored=3
[DEBUG] client: allocation updates applied: added=0 removed=0 updated=22 ignored=3 errors=0
[DEBUG] nomad.deployments_watcher: deadline hit: deployment_id=64d58e2c-d695-27a8-3daa-134d90e10807 job="<ns: "default", id: "get_git_job">" rollback=false
[DEBUG] worker: dequeued evaluation: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc
[DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc job_id=get_git_job namespace=default results="Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
Desired Changes for "get_git_group": (place 0) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 1) (canary 0)"
[DEBUG] worker.service_sched: setting eval status: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc job_id=get_git_job namespace=default status=complete
[DEBUG] worker: updated evaluation: eval="<Eval "0aa4f715-be9c-91de-e1ed-a1d9b41093bc" JobID: "get_git_job" Namespace: "default">"
[DEBUG] worker: ack evaluation: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc
[WARN] client.alloc_runner.task_runner: some environment variables not available for rendering: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task keys=
[ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task error="2 errors occurred:
* failed to parse config:
* Root value must be object: The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.
"
[INFO] client.alloc_runner.task_runner: not restarting task: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task reason="Error was unrecoverable"
[INFO] client.gc: marking allocation for GC: alloc_id=10becf73-7abc-39c6-2114-38eea708103b
[DEBUG] nomad.client: adding evaluations for rescheduling failed allocations: num_evals=1
[DEBUG] worker: dequeued evaluation: eval_id=0490184c-d395-3e65-d38b-8dabd70b9b9d
[DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=0490184c-d395-3e65-d38b-8dabd70b9b9d job_id=get_git_job namespace=default results="Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
Can anyone help with this?

The question was resolved with the help of the Nomad team. The solution is that I need to add a command configuration, because the driver is raw_exec: the task defines no config block at all, so the driver receives an empty config, which appears to be what the "Root value must be object" parse error is complaining about.
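For illustration, a minimal config block along these lines should satisfy the driver (a sketch only; the command and args are hypothetical placeholders chosen for Windows, not the exact fix from the Nomad team):

"config": {
  "command": "cmd",
  "args": ["/c", "dir", "local\\repo"]
}

Since the artifact is fetched into local/repo before the task starts, any executable available on the client can then work with the downloaded files.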

Related

Disable preferReactor in Maven versions:update-properties

Is there a way to bump versions in a multi-module project while ignoring the reactor's SNAPSHOT of a dependency? The plugin seems very keen to use the local version, and the excludeReactor option doesn't seem to be what I thought it was either.
mvn org.codehaus.mojo:versions-maven-plugin:2.12.0:update-properties -DallowMajorUpdates=false -DexcludeReactor=true
[DEBUG] Configuring mojo 'org.codehaus.mojo:versions-maven-plugin:2.12.0:update-properties' with basic configurator -->
[DEBUG] (f) allowDowngrade = false
[DEBUG] (f) allowIncrementalUpdates = true
[DEBUG] (f) allowMajorUpdates = false
[DEBUG] (f) allowMinorUpdates = true
[DEBUG] (f) allowSnapshots = false <<<<
[DEBUG] (f) autoLinkItems = true
[DEBUG] (f) changeRecorderFormat = none
[DEBUG] (f) excludeReactor = true <<<<
...
[DEBUG] Property ${bom.version}: Current winner is: 1.0.185
[DEBUG] Property ${bom.version}: Searching reactor for a valid version...
[DEBUG] Property ${bom.version}: Set of valid available versions from the reactor is [1.0.185-SNAPSHOT]
[DEBUG] Property ${bom.version}: Reactor has version 1.0.185-SNAPSHOT
[DEBUG] Property ${bom.version}: Reactor has a version and we prefer the reactor
[INFO] Updated ${bom.version} from 1.0.181 to 1.0.185-SNAPSHOT <<<< :(
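One workaround, assuming the goal is simply to keep ${bom.version} away from the reactor's SNAPSHOT, is to exclude that property from the update altogether (excludeProperties is a regular parameter of update-properties; the property name here is taken from the log above):

mvn org.codehaus.mojo:versions-maven-plugin:2.12.0:update-properties -DallowMajorUpdates=false -DexcludeProperties=bom.version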

Gatling 3.4 bug with timeout after silent check

I am seeing very weird behavior in Gatling with a WebSocket silent check:
The .await(600 seconds)(check) fails with a timeout after some milliseconds.
First, let me explain my situation. Gatling's WebSocket support does not allow handling ping requests from the server, so I had to cheat and invent a fancy protocol. The rules, and the code with comments, are below:
The client may send an INITIAL event or a NON-INITIAL event (one that is not first in a sequence).
The minimum interval between INITIAL events is 3 seconds.
Each request initiated by the client results in 2 responses: "calculation started" and "calculation result".
A ping request from the server may come at any time; even while we pause between client events, we may still receive a ping.
When we receive a ping request, we must respond.
The usual sequence of events does not depend on ping, so if a ping arrived, we must wait for one more message.
exec(session => dump(session, s"The spin action: event=$eventType oneRound=$oneRound" )).
exec(_.remove(ATTR_PING_SEQ_ID))
.doIfOrElse("CLIENT_INITIAL_EVENT".equals(eventType)) {
exec(session => dump(session, s"Sending CLIENT_INITIAL_EVENT and expect 2 or 3 responses. 3 responses mean that one of them is a ping. wait for each response for ${Config.waitForResponseSec} seconds" )).
exec(
clientActionBuilder
// first response: "started calculation"
.await(Config.waitForResponseSec seconds)(check1)
// second response: "calculated result"
.await(Config.waitForResponseSec seconds)(check2)
// wait for the delay between client initial events.
// We cannot just wait because PING may come within this time and we must handle it!
// Most probably the ping will not come, so we ignore the timeout and make the check silent
.await(Config.minTimeBetweenClientInitialEventsMillis milliseconds)(check3.silent)
)
.exec(session => dump(session, s"SPIN 2 or 3 responses got" ))
// we waited for 3 messages, so if the ping came, we just send pong and do not wait for anything else
.doIf(session => Utils.getStringSessionAttribute(session, ATTR_PING_SEQ_ID, "0") != "0"){
exec(session => dump(session, s"Sending PONG for CLIENT_INITIAL_EVENT" )).
exec(pongBuilder)
}
} {
exec(session => dump(session, s"Sending NON-INITIAL_CLIENT_EVENT and expect 2 responses" )).
exec(
clientActionBuilder
// first response: "started calculation"
.await(Config.waitForResponseSec seconds)(check4)
// second response: "calculation result"
.await(Config.waitForResponseSec seconds)(check5)
)
.exec(session => dump(session, s"NON-INITIAL_CLIENT_EVENT 2 responses got" ))
// we waited for 2 messages. If ping came, then it came instead of a
// "started calculation" or "calculation result", so we have to wait for one more message
.doIf(session => Utils.getStringSessionAttribute(session, ATTR_PING_SEQ_ID, "0") != "0"){
exec(session => dump(session, s"Sending PONG for NON-INITIAL_CLIENT_EVENT" )).
exec(pongBuilder.await(Config.waitForResponseSec seconds)(check6))
.exec(session => dump(session, s"NON-INITIAL_CLIENT_EVENT PONG response got" ))
}
}
.exitHereIfFailed
.exec(_.remove(ATTR_PING_SEQ_ID))
The check for each message is the same. It is cloned because I want to see in the logs which concrete check has timed out:
val checkX = ws.checkTextMessage("myCheckX")
  .matching(jsonPath(matchingCondition).exists)
  .check(
    jsonPath("$.body.data.nextActions[0]").optional.saveAs(ATTR_NEXT_ACTION)
  )
  .check(regex("\"cId\":(.*?),(\"name\":\"Ping\")").optional.saveAs(ATTR_PING_SEQ_ID))
The actual messages are very simple:
val clientActionBuilder = ws("requestClientAction").sendText(
"""{
| "header":
| {
| "name": "Action",
| "cId": ${cId},
| "dType": 2
| },
| "body":
| {
| "type": "#TYPE#",
| "seqId":${seqId},
| "data":{
| }
| }
|}
""".stripMargin.replaceAll("[\\s\n\r]", "").replace("#TYPE#", eventType)
)
val pongBuilder = ws("requestPong").sendText(
"""{
| "header":
| {
| "name": "Ping",
| "cId": ${pingSeqId},
| "dType": 1
| },
| "body": {}
|}
""".stripMargin.replaceAll("[\\s\n\r]", "")
)
The client actions are sent in a loop until the timeout:
asLongAs(session => !timeoutIsOver(startTime, testDurationMillis)) {
  exec(doClientAction())
}
The logic works as expected until a ping request comes from the server. After that, the ws await timeout breaks. Here is what I see in the logs:
DUMP---> The client action: event=INITIAL_EVENT oneRound=false
DUMP---> Sending CLIENT_INITIAL_EVENT and expect 2 or 3 responses. wait for each response for 600 seconds
20:33:18.772 [INFO ] i.g.h.a.w.WsSendTextFrame - Sending text frame {"header":{"name":"Action","cId":103,"dType":2},"body":{}} with websocket 'gatling.http.webSocket': Scenario 'doUntilTimeout', UserId #1
20:33:18.773 [DEBUG] i.g.h.a.w.f.WsIdleState - Send text frame requestClientAction {"header":{"name":"Action","cId":103,"dType":2},"body":{}}
20:33:18.773 [DEBUG] i.g.h.c.i.WebSocketHandler - ctx.write msg=TextWebSocketFrame(data: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 164, cap: 512))
20:33:18.773 [TRACE] i.n.h.c.h.w.WebSocket08FrameEncoder - Encoding WebSocket Frame opCode=1 length=164
20:33:18.773 [DEBUG] i.g.h.a.w.f.WsIdleState - Trigger check after sending text frame
20:33:18.787 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:18.787 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=80
20:33:18.787 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 80, cap: 80))
20:33:18.788 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":{"cId":103,"name":"ClientAction","code":1,"dType":2}}
20:33:18.789 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:18.789 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Perform next check sequence
20:33:19.233 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:19.233 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=1480
20:33:19.233 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 1480, cap: 1480))
20:33:19.235 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":{"cId":37,"name":"ClientEvent","dType":2,"dId":1270},"body":{...}}
20:33:19.237 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:19.238 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Perform next check sequence
20:33:20.871 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:20.871 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=65
20:33:20.871 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 65, cap: 65))
20:33:20.872 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":{"cId":38,"name":"Ping","dType":2}}
20:33:20.872 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:20.872 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Check sequences completed successfully
DUMP---> 2 or 3 responses got
DUMP---> Sending PONG for CLIENT_INITIAL_EVENT
20:33:20.873 [INFO ] i.g.h.a.w.WsSendTextFrame - Sending text frame {"header":{"name":"Ping","cId":38,"dType":1},"body":{}} with websocket 'gatling.http.webSocket': Scenario 'doUntilTimeout', UserId #1
20:33:20.873 [DEBUG] i.g.h.a.w.f.WsIdleState - Send text frame requestPong {"header":{"name":"Ping","cId":38,"dType":1},"body":{}}
20:33:20.873 [DEBUG] i.g.h.c.i.WebSocketHandler - ctx.write msg=TextWebSocketFrame(data: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 65, cap: 256))
20:33:20.873 [TRACE] i.n.h.c.h.w.WebSocket08FrameEncoder - Encoding WebSocket Frame opCode=1 length=65
....
20:33:20.876 [INFO ] i.g.h.a.w.WsSendTextFrame - Sending text frame {"header":{"name":"ClientAction","cId":104,"dType":2},"body":{"type":"NON-INITIAL_CLIENT_EVENT","seqId":304,"data":{...}}} with websocket 'gatling.http.webSocket': Scenario 'doUntilTimeout', UserId #1
20:33:20.876 [DEBUG] i.g.h.a.w.f.WsIdleState - Send text frame requestClientAction {"header":{"name":"CLientAction","cId":104,"dType":2},"body":{"type":"NON-INITIAL_CLIENT_EVENT","seqId":304,"data":{...}}}
20:33:20.876 [DEBUG] i.g.h.c.i.WebSocketHandler - ctx.write msg=TextWebSocketFrame(data: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 167, cap: 512))
20:33:20.876 [TRACE] i.n.h.c.h.w.WebSocket08FrameEncoder - Encoding WebSocket Frame opCode=1 length=167
20:33:20.877 [DEBUG] i.g.h.a.w.f.WsIdleState - Trigger check after sending text frame
20:33:20.897 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:20.897 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=81
20:33:20.897 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 81, cap: 81))
20:33:20.898 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":{"cId":104,"name":"ClientAction","code":1,"dType":2}}
20:33:20.899 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:20.899 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Perform next check sequence
....
20:33:21.535 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:21.535 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=2275
20:33:21.535 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 2275, cap: 2275))
20:33:21.537 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":{"cId":39,"name":"ClientEvent","dType":2},"body":{...}}
20:33:21.540 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:21.540 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Check sequences completed successfully
DUMP---> NON-INITIAL_CLIENT_EVENT 2 responses got
....
DUMP---> Sending CLIENT_INITIAL_EVENT and expect 2 or 3 responses. wait for each response for 600 seconds
20:33:21.542 [INFO ] i.g.h.a.w.WsSendTextFrame - Sending text frame {"header":{"name":"ClientAction","cId":105,"dType":2},"body":{...}}} with websocket 'gatling.http.webSocket': Scenario 'doUntilTimeout', UserId #1
20:33:21.542 [DEBUG] i.g.h.a.w.f.WsIdleState - Send text frame requestClientAction {"header":{"name":"ClientAction","cId":105,"dType":2},"body":{...}}}
20:33:21.542 [DEBUG] i.g.h.c.i.WebSocketHandler - ctx.write msg=TextWebSocketFrame(data: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 164, cap: 512))
20:33:21.542 [TRACE] i.n.h.c.h.w.WebSocket08FrameEncoder - Encoding WebSocket Frame opCode=1 length=164
20:33:21.543 [DEBUG] i.g.h.a.w.f.WsIdleState - Trigger check after sending text frame
20:33:21.558 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame opCode=1
20:33:21.559 [TRACE] i.n.h.c.h.w.WebSocket08FrameDecoder - Decoding WebSocket Frame length=81
20:33:21.559 [DEBUG] i.g.h.c.i.WebSocketHandler - Read msg=TextWebSocketFrame(data: PooledUnsafeDirectByteBuf(ridx: 0, widx: 81, cap: 81))
20:33:21.560 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Received matching message {"header":,"cId":105,"name":"ClientAction","code":1,"dType":2}}
20:33:21.560 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Current check success
20:33:21.561 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Perform next check sequence
20:33:21.742 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Check timeout
20:33:21.743 [DEBUG] i.g.h.a.w.f.WsPerformingCheckState - Check timeout, failing it and performing next action
DUMP---> 2 or 3 responses got
20:33:21.744 [DEBUG] i.g.c.a.Exit - End user #1
20:33:21.748 [DEBUG] i.g.c.c.i.Injector - End user #doUntilTimeout
20:33:21.748 [INFO ] i.g.c.c.i.Injector - All users of scenario doUntilTimeout are stopped
20:33:21.749 [INFO ] i.g.c.c.i.Injector - Stopping
20:33:21.749 [INFO ] i.g.c.c.Controller - Injector has stopped, initiating graceful stop
I received the first message from the WebSocket at 20:33:21.560.
Then the second "await" started. It should time out after 600 seconds,
but in fact I see the timeout right away, at 20:33:21.743.
It looks like a bug in Gatling, as if the timeout property were reset to zero.
Thanks in advance!
Andrei

How do I connect to iccube using Snowflake?

After copying the latest version of the Snowflake driver into the lib folder of icCube, starting the server, and then performing the following:
Schema create - Wizard (Dimensions/Measures -> Table)
Relational Database
Connection details....
Driver Type: JDBC
Server Name:
net.snowflake.client.jdbc.SnowflakeDriver
DB Name:
jdbc:snowflake://xxx-eu-west-1.snowflakecomputing.com
User: dummy
Password: xxx
I get the following error.
[ qtp525575644-48] [DEBUG] (13:21:33.986 UTC) [R] GWT 20 servlet-started
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 request-process-started [session:node0s0rjncom0tmx12mojb0y00nl60] OTHER (schema:none) GwtDiscoverTableNamesQuery cl_GWT_GwtDiscoverTableNamesQuery_1546953693969_1151490167
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 submit-tasks-started 1 q:0 t:0/8
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 submit-task-started GWT
[ qtp525575644-48] [DEBUG] (13:21:34.032 UTC) [R] GWT 20 execute-task-started GWT [LOCK:none]
[ qtp525575644-48] [DEBUG] (13:21:34.034 UTC) [JDBC] creating a new OLAP connection [780055920]
[ qtp525575644-48] [DEBUG] (13:21:34.065 UTC) [JDBC] opening a new DB connection [780055920]
[ qtp525575644-48] [DEBUG] (13:21:34.065 UTC) [JDBC] Postgres URL [-] [net.snowflake.client.jdbc.SnowflakeDriver] [null] [jdbc:snowflake://xxx.eu-west-1.snowflakecomputing.com]
[ gc] [ WARN] (13:21:34.339 UTC) [GC] (PS Scavenge) : 14ms ( free:174MB / total:227MB / max:456MB )
[ qtp525575644-48] [DEBUG] (13:21:36.640 UTC) [JDBC] closing the DB connection [780055920]
[ qtp525575644-48] [ERROR] (13:21:37.119 UTC) [builder] validation error(s)
[BUILDER_JDBC_CONNECTION_CANNOT_BE_CREATED] JDBC connection for url 'jdbc:snowflake://xxx.eu-west-1.snowflakecomputing.com' and user 'pentaho_reporting' cannot be created due to error 'null'
at crazydev.iccube.builder.datasource.jdbc.OlapBuilderJdbcConnection.onOpen(SourceFile:110)
at crazydev.iccube.builder.datasource.OlapBuilderAbstractConnection.open(SourceFile:73)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.datatable.GwtDiscoverTableNamesQueryHandler.doHandleImpl(SourceFile:65)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.datatable.GwtDiscoverTableNamesQueryHandler.doHandleImpl(SourceFile:29)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.unsafeHandleImpl(SourceFile:239)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.safeHandleImpl(SourceFile:186)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.handleImpl(SourceFile:178)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.handleImpl(SourceFile:70)
at crazydev.iccube.gwt.server.requesthandler.common.GwtAbstractQueryHandler.handle(SourceFile:75)
at crazydev.iccube.gwt.server.requesthandler.common.GwtAbstractQueryHandler.handle(SourceFile:58)
at crazydev.iccube.gwt.server.requesthandler.common.GwtQueryHandlerDispatcher.dispatchQuery(SourceFile:528)
at crazydev.iccube.server.request.request.gwt.IcCubeGwtServerRequest$Task.unsafeExecute(SourceFile:629)
at crazydev.iccube.server.request.task.IcCubeServerTask.execute(SourceFile:247)
at crazydev.iccube.server.request.executor.IcCubeServerTaskRunnable.run(SourceFile:42)
The Snowflake JDBC driver throws a 'SQLFeatureNotSupportedException' with an empty message when setReadOnly is called on the connection.
We fixed this in our dev branch; the fix will be available in the next release or as a pre-release.
PS: Discovering tables doesn't work very well; as a workaround you might add SQL queries as tables.
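The failure should be reproducible outside icCube with plain JDBC (a minimal sketch; the account URL and credentials are placeholders taken from the question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SnowflakeReadOnlyCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "dummy");      // placeholder credentials
        props.put("password", "xxx");
        try (Connection con = DriverManager.getConnection(
                "jdbc:snowflake://xxx.eu-west-1.snowflakecomputing.com", props)) {
            // icCube calls setReadOnly while opening the connection; affected
            // driver versions throw SQLFeatureNotSupportedException here
            con.setReadOnly(true);
            System.out.println("setReadOnly OK");
        }
    }
}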

Gradle transforms https maven repository to http 443 request

My build.gradle is configured as:
repositories {
    mavenLocal()
    mavenCentral()
    jcenter()
    maven {
        url "https://<myrepo>/repo"
    }
}
However,
$ gradle build --debug
gives me:
[...]
12:01:58.487 [DEBUG] [org.gradle.api.internal.artifacts.ivyservice.IvyLoggingAdaper] setting 'https.proxyHost' to '<myrepo>'
[...]
12:01:59.070 [DEBUG] [org.gradle.internal.resource.transport.http.HttpClientHelper] Performing HTTP GET: https://repo1.maven.org/maven2/org/xbib/archive/maven-metadata.xml
12:01:59.316 [DEBUG] [org.apache.http.client.protocol.RequestAddCookies] CookieSpec selected: default
12:01:59.324 [DEBUG] [org.apache.http.client.protocol.RequestAuthCache] Auth cache not set in the context
12:01:59.325 [DEBUG] [org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection request: [route: {tls}->http://<myrepo>:443->https://repo1.maven.org:443][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
12:01:59.336 [DEBUG] [org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection leased: [id: 0][route: {tls}->http://<myrepo>:443->https://repo1.maven.org:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
12:01:59.337 [DEBUG] [org.apache.http.impl.execchain.MainClientExec] Opening connection {tls}->http://<myrepo>:443->https://repo1.maven.org:443
12:01:59.340 [DEBUG] [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connecting to <myrepo>/<reposerverIP>:443
12:01:59.342 [DEBUG] [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connection established <localIP>:49298<-><reposerverIP>:443
12:01:59.346 [DEBUG] [org.apache.http.impl.conn.DefaultHttpResponseParser] Garbage in response:
[...]
12:01:59.347 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Close connection
12:01:59.347 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Shutdown connection
12:01:59.348 [DEBUG] [org.apache.http.impl.execchain.MainClientExec] Connection discarded
12:01:59.348 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Close connection
[...]
...though I don't know why Gradle feels motivated to transform the "https" configuration into "http: ... :443". Does anyone have an idea what configuration causes this?
As I wasn't able to find the configuration error itself, I am happy to have solved the problem by simply:
uninstalling Gradle completely,
restarting Ubuntu, and
installing Gradle 2.14 again.
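For what it's worth, the very first debug line above (setting 'https.proxyHost' to '<myrepo>') suggests an HTTPS proxy pointing at <myrepo> was configured somewhere, which would also explain the {tls}->http://<myrepo>:443 route in the HttpClient logs: that is how HttpClient prints a proxied route, not a downgrade to plain HTTP. If the problem reappears, a place worth checking is ~/.gradle/gradle.properties for proxy settings along these lines (values are placeholders):

systemProp.https.proxyHost=<myrepo>
systemProp.https.proxyPort=443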

Apache Storm - Kinesis Spout throwing AmazonClientException backing off

2016-02-02 16:15:18 c.a.s.k.s.u.InfiniteConstantBackoffRetry [DEBUG] Caught exception of type com.amazonaws.AmazonClientException, backing off for 1000 ms.
I tested GET and PUT using Streams and Get requests; both worked flawlessly. I have all 3 variants: Batch, Storm, and Spark. Spark (using KinesisStreams) is working; Batch (plain Get and Put) is working; for Storm I am planning to use the KinesisSpout library from Kinesis, and it is failing with no clue.
final KinesisSpoutConfig config = new KinesisSpoutConfig(streamname, zookeeperurl);
config.withInitialPositionInStream(ipis);
config.withRegion(Regions.fromName(regionName));
config.withCheckpointIntervalMillis(Integer.parseInt(checkinterval));
config.withZookeeperPrefix("kinesis-zooprefix-" + name);
System.setProperty("aws.accessKeyId", key);
System.setProperty("aws.secretKey", keysecret);
SystemPropertiesCredentialsProvider scp = new SystemPropertiesCredentialsProvider();
final KinesisSpout spout = new KinesisSpoutConflux(config, scp, new ClientConfiguration());
What am I doing wrong?
Storm Logs:
2016-02-02 16:15:17 c.a.s.k.s.KinesisSpout [INFO] KinesisSpoutConflux[taskIndex=0] open() called with topoConfig task index 0 for processing stream Kinesis-Conflux
2016-02-02 16:15:17 c.a.s.k.s.KinesisSpout [DEBUG] KinesisSpoutConflux[taskIndex=0] activating. Starting to process stream Kinesis-Test
2016-02-02 16:15:17 c.a.s.k.s.KinesisHelper [INFO] Using us-east-1 region
I don't see "nextTuple" getting called.
My Versions:
storm = 0.9.3
kinesis-storm-spout = 1.1.1
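Since the spout fails with only a generic AmazonClientException, one way to narrow it down is to call Kinesis directly with the same credentials provider and region, outside Storm (a minimal sketch, assuming aws-java-sdk 1.x and the placeholder values from the snippet above):

import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.AmazonKinesisClient;

public class KinesisAccessCheck {
    public static void main(String[] args) {
        System.setProperty("aws.accessKeyId", "key");      // same placeholders as above
        System.setProperty("aws.secretKey", "keysecret");
        AmazonKinesisClient kinesis =
                new AmazonKinesisClient(new SystemPropertiesCredentialsProvider());
        kinesis.configureRegion(Regions.US_EAST_1);
        // If this throws, the problem is credentials/region/network, not the spout itself
        System.out.println(kinesis.describeStream("Kinesis-Test")
                .getStreamDescription().getStreamStatus());
    }
}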
