How do I connect to icCube using Snowflake? - jdbc

After copying the latest version of the Snowflake JDBC driver into the lib folder of icCube, starting the server, and then performing the following:
Schema create - Wizard (Dimensions/Measures -> Table)
Relational Database
Connection details....
Driver Type: JDBC
Server Name:
net.snowflake.client.jdbc.SnowflakeDriver
DB Name:
jdbc:snowflake://xxx-eu-west-1.snowflakecomputing.com
User: dummy
Password: xxx
I get the following error.
[ qtp525575644-48] [DEBUG] (13:21:33.986 UTC) [R] GWT 20 servlet-started
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 request-process-started [session:node0s0rjncom0tmx12mojb0y00nl60] OTHER (schema:none) GwtDiscoverTableNamesQuery cl_GWT_GwtDiscoverTableNamesQuery_1546953693969_1151490167
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 submit-tasks-started 1 q:0 t:0/8
[ qtp525575644-48] [DEBUG] (13:21:34.031 UTC) [R] GWT 20 submit-task-started GWT
[ qtp525575644-48] [DEBUG] (13:21:34.032 UTC) [R] GWT 20 execute-task-started GWT [LOCK:none]
[ qtp525575644-48] [DEBUG] (13:21:34.034 UTC) [JDBC] creating a new OLAP connection [780055920]
[ qtp525575644-48] [DEBUG] (13:21:34.065 UTC) [JDBC] opening a new DB connection [780055920]
[ qtp525575644-48] [DEBUG] (13:21:34.065 UTC) [JDBC] Postgres URL [-] [net.snowflake.client.jdbc.SnowflakeDriver] [null] [jdbc:snowflake://xxx.eu-west-1.snowflakecomputing.com]
[ gc] [ WARN] (13:21:34.339 UTC) [GC] (PS Scavenge) : 14ms ( free:174MB / total:227MB / max:456MB )
[ qtp525575644-48] [DEBUG] (13:21:36.640 UTC) [JDBC] closing the DB connection [780055920]
[ qtp525575644-48] [ERROR] (13:21:37.119 UTC) [builder] validation error(s)
[BUILDER_JDBC_CONNECTION_CANNOT_BE_CREATED] JDBC connection for url 'jdbc:snowflake://xxx.eu-west-1.snowflakecomputing.com' and user 'pentaho_reporting' cannot be created due to error 'null'
at crazydev.iccube.builder.datasource.jdbc.OlapBuilderJdbcConnection.onOpen(SourceFile:110)
at crazydev.iccube.builder.datasource.OlapBuilderAbstractConnection.open(SourceFile:73)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.datatable.GwtDiscoverTableNamesQueryHandler.doHandleImpl(SourceFile:65)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.datatable.GwtDiscoverTableNamesQueryHandler.doHandleImpl(SourceFile:29)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.unsafeHandleImpl(SourceFile:239)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.safeHandleImpl(SourceFile:186)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.handleImpl(SourceFile:178)
at crazydev.iccube.gwt.server.requesthandler.builder.handlers.common.GwtAbstractBuilderQueryHandler.handleImpl(SourceFile:70)
at crazydev.iccube.gwt.server.requesthandler.common.GwtAbstractQueryHandler.handle(SourceFile:75)
at crazydev.iccube.gwt.server.requesthandler.common.GwtAbstractQueryHandler.handle(SourceFile:58)
at crazydev.iccube.gwt.server.requesthandler.common.GwtQueryHandlerDispatcher.dispatchQuery(SourceFile:528)
at crazydev.iccube.server.request.request.gwt.IcCubeGwtServerRequest$Task.unsafeExecute(SourceFile:629)
at crazydev.iccube.server.request.task.IcCubeServerTask.execute(SourceFile:247)
at crazydev.iccube.server.request.executor.IcCubeServerTaskRunnable.run(SourceFile:42)

The Snowflake JDBC driver throws a 'SQLFeatureNotSupportedException' with an empty message when setReadOnly is called on the connection.
We fixed this in our dev branch; the fix will be available in the next release or as a pre-release.
PS: discovering tables doesn't work very well; as a workaround you might add SQL queries as tables.
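Until the fixed driver ships, the failure mode can be guarded against generically. A minimal sketch, assuming you control the code that calls setReadOnly (the class and method names are mine, and the Connection below is a stand-in that simulates the reported driver behaviour, not a real Snowflake connection):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

public class SetReadOnlyGuard {

    // Calls setReadOnly but tolerates drivers that throw
    // SQLFeatureNotSupportedException, as described above.
    public static boolean trySetReadOnly(Connection c, boolean readOnly) {
        try {
            c.setReadOnly(readOnly);
            return true;
        } catch (SQLFeatureNotSupportedException e) {
            return false;                      // unsupported: ignore and continue
        } catch (SQLException e) {
            throw new RuntimeException(e);     // anything else is a real error
        }
    }

    // Stand-in connection mimicking the reported behaviour (not Snowflake code).
    public static Connection unsupportedConnection() {
        InvocationHandler h = (proxy, method, args) -> {
            if ("setReadOnly".equals(method.getName())) {
                throw new SQLFeatureNotSupportedException();
            }
            return null;
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class}, h);
    }

    public static void main(String[] args) {
        System.out.println(trySetReadOnly(unsupportedConnection(), true)); // prints "false"
    }
}
```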


Nomad Artifact download issue

Operating system
Windows 10
Working on Nomad 0.11.3
Nomad Java SDK 0.11.3.0
Nomad runs in dev mode
I am trying to download a git repo using a Nomad job, but I get the error below after the repo is loaded into the job's allocation folder.
Error ::
2 errors occurred:
* failed to parse config:
* Root value must be object: The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.
Job file (if appropriate)
{
"id": "get_git_job",
"name": "get_git_job",
"datacenters": [
"dc1"
],
"taskGroups": [
{
"name": "get_git_group",
"tasks": [
{
"name": "get_git_task",
"driver": "raw_exec",
"resources": {
"cpu": 500,
"memoryMb": 2000
},
"artifacts": [
{
"getterSource": "github.com/hashicorp/nomad",
"relativeDest": "local/repo"
}
],
"leader": false,
"shutdownDelay": 0
}
]
}
],
"dispatched": false
}
Nomad Client logs (if appropriate)
[INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad.exe: opening fifo: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task #module=logmon path=//./pipe/get_git_task-48748a1a.stdout timestamp=2020-12-02T16:32:33.755+0530
[DEBUG] client.alloc_runner.task_runner.task_hook.artifacts: downloading artifact: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task artifact=github.com/hashicorp/nomad
[INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad.exe: opening fifo: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task #module=logmon path=//./pipe/get_git_task-48748a1a.stderr timestamp=2020-12-02T16:32:33.761+0530
[DEBUG] client: updated allocations: index=518 total=25 pulled=22 filtered=3
[DEBUG] client: allocation updates: added=0 removed=0 updated=22 ignored=3
[DEBUG] client: allocation updates applied: added=0 removed=0 updated=22 ignored=3 errors=0
[DEBUG] nomad.deployments_watcher: deadline hit: deployment_id=64d58e2c-d695-27a8-3daa-134d90e10807 job="<ns: "default", id: "get_git_job">" rollback=false
[DEBUG] worker: dequeued evaluation: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc
[DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc job_id=get_git_job namespace=default results="Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
Desired Changes for "get_git_group": (place 0) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 1) (canary 0)"
[DEBUG] worker.service_sched: setting eval status: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc job_id=get_git_job namespace=default status=complete
[DEBUG] worker: updated evaluation: eval="<Eval "0aa4f715-be9c-91de-e1ed-a1d9b41093bc" JobID: "get_git_job" Namespace: "default">"
[DEBUG] worker: ack evaluation: eval_id=0aa4f715-be9c-91de-e1ed-a1d9b41093bc
[WARN] client.alloc_runner.task_runner: some environment variables not available for rendering: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task keys=
[ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task error="2 errors occurred:
* failed to parse config:
* Root value must be object: The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.
"
[INFO] client.alloc_runner.task_runner: not restarting task: alloc_id=10becf73-7abc-39c6-2114-38eea708103b task=get_git_task reason="Error was unrecoverable"
[INFO] client.gc: marking allocation for GC: alloc_id=10becf73-7abc-39c6-2114-38eea708103b
[DEBUG] nomad.client: adding evaluations for rescheduling failed allocations: num_evals=1
[DEBUG] worker: dequeued evaluation: eval_id=0490184c-d395-3e65-d38b-8dabd70b9b9d
[DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=0490184c-d395-3e65-d38b-8dabd70b9b9d job_id=get_git_job namespace=default results="Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
Can anyone help with this?
The question was resolved with the help of the Nomad team. The solution is that I needed to add a command configuration, because the driver is raw_exec.
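A sketch of the corrected task, assuming the same API-style key casing as the job file above; the command and args values are placeholders for whatever the task should actually run (the raw_exec driver requires a config block with a command):

```json
{
  "name": "get_git_task",
  "driver": "raw_exec",
  "config": {
    "command": "cmd",
    "args": ["/c", "dir", "local/repo"]
  },
  "resources": {
    "cpu": 500,
    "memoryMb": 2000
  },
  "artifacts": [
    {
      "getterSource": "github.com/hashicorp/nomad",
      "relativeDest": "local/repo"
    }
  ]
}
```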

How to solve MQJE001: Completion Code '2', Reason '2085'

I am writing to an MQ queue from Java and I intermittently get the error response below. I am using IBM MQ version 9.
What could be the cause of this, given that it is intermittent and the queue / queue manager being written to exists and was running at the time?
[INFO ] 2020-06-13 22:48:03.752+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Finished establishing a connection to DB
[INFO ] 2020-06-13 22:48:03.752+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - init
[INFO ] 2020-06-13 22:48:03.758+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - 5. Before calling write.selectQMgr()
[INFO ] 2020-06-13 22:48:03.864+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - 6. After selecting Queue Manager name
[DEBUG] 2020-06-13 22:48:03.876+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - ReasonCode:2085
[DEBUG] 2020-06-13 22:48:03.877+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Completion Code:2
[ERROR] 2020-06-13 22:48:03.877+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Message:MQJE001: Completion Code '2', Reason '2085'.
com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2085'
at com.ibm.mq.MQDestination.open(MQDestination.java:322) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at com.ibm.mq.MQQueue.<init>(MQQueue.java:236) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at com.ibm.mq.MQQueueManager.accessQueue(MQQueueManager.java:3288) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at custom.MQWriteFile.write(MQWriteFile.java:364) ~[PGPEncryptedSOAPWMQWriter.jar:?]
at custom.MQWriteFile.<init>(MQWriteFile.java:221) [PGPEncryptedSOAPWMQWriter.jar:?]
at custom.PGPEncryptedSOAPWMQWriter.main(PGPEncryptedSOAPWMQWriter.java:69) [PGPEncryptedSOAPWMQWriter.jar:?]
[INFO ] 2020-06-13 22:48:03.879+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - LogStatusInDB
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Reason Code Desc:MQRC_UNKNOWN_OBJECT_NAME
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Completion Code Desc:MQCC_FAILED
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Returning with:3
Reason code 2085 (MQRC_UNKNOWN_OBJECT_NAME) means the queue manager did not recognize the object name it was asked to open. Most likely the cause is logic-flow related, with variables or objects falling out of scope and then coming back into scope with reset / default values.
The traces that you are running will tell you which values your code is actually using. You will most likely need to add logging to your application to determine why the values are being lost.
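One way to add that logging is to validate and print the exact queue name immediately before the accessQueue call at MQWriteFile.java:364. A minimal sketch, assuming nothing about the asker's code (the class and method names here are mine):

```java
public class QueueNameGuard {

    // Validate and log the queue name the code is actually about to use.
    // A 2085 means the queue manager did not recognize this exact string,
    // so compare the logged value against DISPLAY QLOCAL(*) in runmqsc.
    public static String checkQueueName(String qName) {
        if (qName == null || qName.trim().isEmpty()) {
            // A name that went null/blank points at the scope problem described above.
            throw new IllegalStateException(
                    "queue name is null/blank - variable out of scope?");
        }
        String cleaned = qName.trim();
        System.out.println("About to open queue [" + cleaned + "]");
        return cleaned;
    }

    public static void main(String[] args) {
        checkQueueName(" DEV.QUEUE.1 ");   // logs the trimmed name
    }
}
```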

elasticsearch httpClient3.x bulk api

I use HttpClient 3.x to call the Elasticsearch _bulk REST API (the server is Elasticsearch 5.4.1, per the response header below):
HttpClient httpClient = new HttpClient();
String postUrl = "http://127.0.0.1:9200/_bulk";
PostMethod postMethod = new PostMethod(postUrl);
// each bulk action is one line of JSON terminated by a newline
String query = "{\"delete\":{\"_index\":\"equipment\", \"_type\":\"unit\", \"_id\":\"3\" } }" + "\n";
System.out.println("query =" + query);
StringRequestEntity requestEntity = new StringRequestEntity(query,
        "application/x-ndjson", "UTF-8");
postMethod.setRequestEntity(requestEntity);
int statusCode = httpClient.executeMethod(postMethod);
System.out.println("Bulk Response status code: " + statusCode);
System.out.println("Bulk Response body: ");
System.out.println(postMethod.getResponseBodyAsString());
The console returns:
query ={"delete":{"_index":"equipment", "_type":"unit", "_id":"3" } }
Bulk Response status code: 400
Bulk Response body:
{"error":{"root_cause":[{"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [delete].","line":1,"col":11}],"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [delete].","line":1,"col":11},"status":400}
When I paste the code below into Kibana, it returns "status": 200:
POST _bulk
{
"delete": {
"_index": "equipment",
"_type": "unit",
"_id": "1"
}
}
Is this because HttpClient 3.x doesn't support x-ndjson, or is the query format wrong, or is it some other reason? How can I run the _bulk API successfully with HttpClient 3.x?
Result
2017/08/02 14:29:49:571 CST [DEBUG] HttpClient - Java version: 1.7.0_79
2017/08/02 14:29:49:575 CST [DEBUG] HttpClient - Java vendor: Oracle Corporation
2017/08/02 14:29:49:575 CST [DEBUG] HttpClient - Java class path: /Users/qk/Documents/workspace-luna/httpclient-test/bin:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-beanutils.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-codec-1.10.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-collections4-4.1.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-httpclient.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-lang.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/commons-logging-1.1.3.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/ezmorph-1.0.5.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/json-lib-2.2-jdk15.jar:/Users/qk/Documents/workspace-luna/httpclient-test/lib/morph-1.1.1.jar
2017/08/02 14:29:49:575 CST [DEBUG] HttpClient - Operating system name: Mac OS X
2017/08/02 14:29:49:575 CST [DEBUG] HttpClient - Operating system architecture: x86_64
2017/08/02 14:29:49:575 CST [DEBUG] HttpClient - Operating system version: 10.11.6
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SUN 1.7: SUN (DSA key/parameter generation; DSA signing; SHA-1, MD5 digests; SecureRandom; X.509 certificates; JKS keystore; PKIX CertPathValidator; PKIX CertPathBuilder; LDAP, Collection CertStores, JavaPolicy Policy; JavaLoginConfig Configuration)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunRsaSign 1.7: Sun RSA signature provider
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunEC 1.7: Sun Elliptic Curve provider (EC, ECDSA, ECDH)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunJSSE 1.7: Sun JSSE provider(PKCS12, SunX509 key/trust factories, SSLv3, TLSv1)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunJCE 1.7: SunJCE Provider (implements RSA, DES, Triple DES, AES, Blowfish, ARCFOUR, RC2, PBE, Diffie-Hellman, HMAC)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunJGSS 1.7: Sun (Kerberos v5, SPNEGO)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunSASL 1.7: Sun SASL provider(implements client mechanisms for: DIGEST-MD5, GSSAPI, EXTERNAL, PLAIN, CRAM-MD5, NTLM; server mechanisms for: DIGEST-MD5, GSSAPI, CRAM-MD5, NTLM)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - XMLDSig 1.0: XMLDSig (DOM XMLSignatureFactory; DOM KeyInfoFactory)
2017/08/02 14:29:49:635 CST [DEBUG] HttpClient - SunPCSC 1.7: Sun PC/SC provider
2017/08/02 14:29:49:636 CST [DEBUG] HttpClient - Apple 1.1: Apple Provider
2017/08/02 14:29:49:639 CST [DEBUG] DefaultHttpParams - Set parameter http.useragent = Jakarta Commons-HttpClient/3.1
2017/08/02 14:29:49:641 CST [DEBUG] DefaultHttpParams - Set parameter http.protocol.version = HTTP/1.1
2017/08/02 14:29:49:642 CST [DEBUG] DefaultHttpParams - Set parameter http.connection-manager.class = class org.apache.commons.httpclient.SimpleHttpConnectionManager
2017/08/02 14:29:49:642 CST [DEBUG] DefaultHttpParams - Set parameter http.protocol.cookie-policy = default
2017/08/02 14:29:49:642 CST [DEBUG] DefaultHttpParams - Set parameter http.protocol.element-charset = US-ASCII
2017/08/02 14:29:49:642 CST [DEBUG] DefaultHttpParams - Set parameter http.protocol.content-charset = ISO-8859-1
2017/08/02 14:29:49:643 CST [DEBUG] DefaultHttpParams - Set parameter http.method.retry-handler = org.apache.commons.httpclient.DefaultHttpMethodRetryHandler#2972a4d0
2017/08/02 14:29:49:644 CST [DEBUG] DefaultHttpParams - Set parameter http.dateparser.patterns = [EEE, dd MMM yyyy HH:mm:ss zzz, EEEE, dd-MMM-yy HH:mm:ss zzz, EEE MMM d HH:mm:ss yyyy, EEE, dd-MMM-yyyy HH:mm:ss z, EEE, dd-MMM-yyyy HH-mm-ss z, EEE, dd MMM yy HH:mm:ss z, EEE dd-MMM-yyyy HH:mm:ss z, EEE dd MMM yyyy HH:mm:ss z, EEE dd-MMM-yyyy HH-mm-ss z, EEE dd-MMM-yy HH:mm:ss z, EEE dd MMM yy HH:mm:ss z, EEE,dd-MMM-yy HH:mm:ss z, EEE,dd-MMM-yyyy HH:mm:ss z, EEE, dd-MM-yyyy HH:mm:ss z]
query ={"delete":{"_index":"equipment", "_type":"unit", "_id":"2" } }
2017/08/02 14:29:49:692 CST [DEBUG] HttpConnection - Open connection to 127.0.0.1:9200
2017/08/02 14:29:49:707 CST [DEBUG] header - >> "POST /_bulk HTTP/1.1[\r][\n]"
2017/08/02 14:29:49:707 CST [DEBUG] HttpMethodBase - Adding Host request header
2017/08/02 14:29:49:719 CST [DEBUG] header - >> "User-Agent: Jakarta Commons-HttpClient/3.1[\r][\n]"
2017/08/02 14:29:49:719 CST [DEBUG] header - >> "Host: 127.0.0.1:9200[\r][\n]"
2017/08/02 14:29:49:719 CST [DEBUG] header - >> "Content-Length: 63[\r][\n]"
2017/08/02 14:29:49:719 CST [DEBUG] header - >> "Content-Type: application/x-ndjson; charset=UTF-8[\r][\n]"
2017/08/02 14:29:49:720 CST [DEBUG] header - >> "[\r][\n]"
2017/08/02 14:29:49:720 CST [DEBUG] content - >> "{"delete":{"_index":"equipment", "_type":"unit", "_id":"2" } }[\n]"
2017/08/02 14:29:49:720 CST [DEBUG] EntityEnclosingMethod - Request body sent
2017/08/02 14:29:49:735 CST [DEBUG] header - << "HTTP/1.1 200 OK[\r][\n]"
2017/08/02 14:29:49:735 CST [DEBUG] header - << "HTTP/1.1 200 OK[\r][\n]"
2017/08/02 14:29:49:736 CST [DEBUG] header - << "Warning: 299 Elasticsearch-5.4.1-2cfe0df "Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header." "Wed, 02 Aug 2017 06:29:49 GMT"[\r][\n]"
2017/08/02 14:29:49:736 CST [DEBUG] header - << "content-type: application/json; charset=UTF-8[\r][\n]"
2017/08/02 14:29:49:736 CST [DEBUG] header - << "content-length: 203[\r][\n]"
2017/08/02 14:29:49:737 CST [DEBUG] header - << "[\r][\n]"
Bulk Response status code: 200
Bulk Response body:
2017/08/02 14:29:49:737 CST [DEBUG] HttpMethodBase - Buffering response body
2017/08/02 14:29:49:738 CST [DEBUG] content - << "{"took":1,"errors":false,"items":[{"delete":{"found":false,"_index":"equipment","_type":"unit","_id":"2","_version":3,"result":"not_found","_shards":{"total":2,"successful":1,"failed":0},"status":404}}]}"
2017/08/02 14:29:49:738 CST [DEBUG] HttpMethodBase - Resorting to protocol version default close connection policy
2017/08/02 14:29:49:738 CST [DEBUG] HttpMethodBase - Should NOT close connection, using HTTP/1.1
2017/08/02 14:29:49:738 CST [DEBUG] HttpConnection - Releasing connection back to connection manager.
{"took":1,"errors":false,"items":[{"delete":{"found":false,"_index":"equipment","_type":"unit","_id":"2","_version":3,"result":"not_found","_shards":{"total":2,"successful":1,"failed":0},"status":404}}]}
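The successful request in the log above shows the shape the _bulk endpoint expects: NDJSON, with each action object on a single line and every line, including the last, terminated by '\n'. A small helper can enforce that shape before the request is sent; a sketch with hypothetical names of mine, independent of HttpClient:

```java
import java.util.Arrays;
import java.util.List;

public class BulkBody {

    // One delete action as a single NDJSON line (no embedded newlines).
    public static String deleteAction(String index, String type, String id) {
        return "{\"delete\":{\"_index\":\"" + index
             + "\",\"_type\":\"" + type
             + "\",\"_id\":\"" + id + "\"}}";
    }

    // Joins action lines into a _bulk body, enforcing the NDJSON rules.
    public static String build(List<String> actionLines) {
        StringBuilder sb = new StringBuilder();
        for (String line : actionLines) {
            if (line.indexOf('\n') >= 0) {
                throw new IllegalArgumentException("each bulk action must be a single line");
            }
            sb.append(line).append('\n');   // trailing newline is mandatory
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The result would be passed to StringRequestEntity as in the code above.
        System.out.print(build(Arrays.asList(deleteAction("equipment", "unit", "3"))));
    }
}
```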

Ebean and H2 configuration issue with Play framework 2.5

I am trying to develop with Play Framework version 2.5, and I cannot get the database connection right. I am using the H2 database with the Ebean plugin 3.0.2 as an ORM. I tried several options for the entries in application.conf, based on information found on the Play Framework website and many of your posts. Please find below my configuration files and the error trace:
**Application.conf**
play.db {
# The combination of these two settings results in "db.default" as the
#default JDBC pool:
config = "db"
default = "default"
prototype {
# Sets a fixed JDBC connection pool size of 50
#hikaricp.minimumIdle = 50
#hikaricp.maximumPoolSize = 50
}
}
db {
default.hikaricp.dataSourceClassName = org.h2.jdbcx.JdbcDataSource
default.driver = org.h2.Driver
default.url = "jdbc:h2:mem:play"
default.username = sa
default.password = ""
ebean.default = ["models.*"]
#play.ebean.default.dataSource = default
default.logSql=true
}
**Plugins.sbt**
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.5.10")
// Web plugins
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.1.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.4")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.8")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.1.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.1.0")
addSbtPlugin("org.irundaia.sbt" % "sbt-sassify" % "1.4.6")
addSbtPlugin("com.typesafe.sbt" % "sbt-play-enhancer" % "1.1.0")
enablePlugins(PlayEbean).
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "3.0.2")
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" %
"5.0.1")
**Build.sbt**
name := """Institut"""
version := "1.0-SNAPSHOT"
lazy val Institut = (project in
file(".")).enablePlugins(PlayJava,PlayEbean)
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
javaJdbc,
cache,
javaWs,
evolutions
)
**Console output**
[info] application - Creating Pool for datasource 'ebean'
[error] c.z.h.HikariConfig - HikariPool-1 - dataSource or dataSourceClassName or jdbcUrl is required.
[info] application - Creating Pool for datasource 'ebean'
[error] c.z.h.HikariConfig - HikariPool-2 - dataSource or dataSourceClassName or jdbcUrl is required.
[info] application - Creating Pool for datasource 'ebean'
[error] c.z.h.HikariConfig - HikariPool-3 - dataSource or dataSourceClassName or jdbcUrl is required.
[info] application - Creating Pool for datasource 'ebean'
[error] c.z.h.HikariConfig - HikariPool-4 - dataSource or dataSourceClassName or jdbcUrl is required.
[error] application -
! #736eodoo7 - Internal server error, for (GET) [/] ->
play.api.Configuration$$anon$1: Configuration error[Cannot connect to database [ebean]]
at play.api.Configuration$.configError(Configuration.scala:154)
at play.api.Configuration.reportError(Configuration.scala:806)
at play.api.db.DefaultDBApi$$anonfun$connect$1.apply(DefaultDBApi.scala:48)
at play.api.db.DefaultDBApi$$anonfun$connect$1.apply(DefaultDBApi.scala:42)
at scala.collection.immutable.List.foreach(List.scala:381)
at play.api.db.DefaultDBApi.connect(DefaultDBApi.scala:42)
at play.api.db.DBApiProvider.get$lzycompute(DBModule.scala:72)
at play.api.db.DBApiProvider.get(DBModule.scala:62)
at play.api.db.DBApiProvider.get(DBModule.scala:58)
Caused by: play.api.Configuration$$anon$1: Configuration error[dataSource or dataSourceClassName or jdbcUrl is required.]
at play.api.Configuration$.configError(Configuration.scala:154)
at play.api.PlayConfig.reportError(Configuration.scala:996)
at play.api.db.HikariCPConnectionPool.create(HikariCPModule.scala:70)
at play.api.db.PooledDatabase.createDataSource(Databases.scala:199)
at play.api.db.DefaultDatabase.dataSource$lzycompute(Databases.scala:123)
at play.api.db.DefaultDatabase.dataSource(Databases.scala:121)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:138)
at play.api.db.DefaultDBApi$$anonfun$connect$1.apply(DefaultDBApi.scala:44)
at play.api.db.DefaultDBApi$$anonfun$connect$1.apply(DefaultDBApi.scala:42)
Caused by: java.lang.IllegalArgumentException: dataSource or dataSourceClassName or jdbcUrl is required.
at com.zaxxer.hikari.HikariConfig.validate(HikariConfig.java:786)
at play.api.db.HikariCPConfig.toHikariConfig(HikariCPModule.scala:141)
at scala.util.Try$.apply(Try.scala:192)
at play.api.db.HikariCPConnectionPool.create(HikariCPModule.scala:54)
at play.api.db.PooledDatabase.createDataSource(Databases.scala:199)
at play.api.db.DefaultDatabase.dataSource$lzycompute(Databases.scala:123)
at play.api.db.DefaultDatabase.dataSource(Databases.scala:121)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
I had the same error message with the H2 configuration, which was resolved by adding the dataSourceClassName; however, I have tried to use default as the datasource for Ebean without success.
Thank you!
ebean.default = ["models.*"] is not a member of the key db; you have to place it outside of the db block.
According to your log, Play thinks that this key is another database:
play.api.Configuration$$anon$1: Configuration error[Cannot connect to database [ebean]]
More info here:
https://www.playframework.com/documentation/2.5.x/JavaDatabase#H2-database-engine-connection-properties
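Following that answer, a corrected application.conf sketch with ebean.default moved out of the db block (other values as in the question):

```conf
db {
  default.hikaricp.dataSourceClassName = org.h2.jdbcx.JdbcDataSource
  default.driver = org.h2.Driver
  default.url = "jdbc:h2:mem:play"
  default.username = sa
  default.password = ""
  default.logSql = true
}

# outside of db, so Play no longer treats "ebean" as a database to connect to
ebean.default = ["models.*"]
```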

Gradle transforms https maven repository to http 443 request

My build.gradle is configured as:
repositories {
mavenLocal()
mavenCentral()
jcenter()
maven {
url "https://<myrepo>/repo"
}
}
However,
$ gradle build --debug
gives me:
[...]
12:01:58.487 [DEBUG] [org.gradle.api.internal.artifacts.ivyservice.IvyLoggingAdaper] setting 'https.proxyHost' to '<myrepo>'
[...]
12:01:59.070 [DEBUG] [org.gradle.internal.resource.transport.http.HttpClientHelper] Performing HTTP GET: https://repo1.maven.org/maven2/org/xbib/archive/maven-metadata.xml
12:01:59.316 [DEBUG] [org.apache.http.client.protocol.RequestAddCookies] CookieSpec selected: default
12:01:59.324 [DEBUG] [org.apache.http.client.protocol.RequestAuthCache] Auth cache not set in the context
12:01:59.325 [DEBUG] [org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection request: [route: {tls}->http://<myrepo>:443->https://repo1.maven.org:443][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
12:01:59.336 [DEBUG] [org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection leased: [id: 0][route: {tls}->http://<myrepo>:443->https://repo1.maven.org:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
12:01:59.337 [DEBUG] [org.apache.http.impl.execchain.MainClientExec] Opening connection {tls}->http://<myrepo>:443->https://repo1.maven.org:443
12:01:59.340 [DEBUG] [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connecting to <myrepo>/<reposerverIP>:443
12:01:59.342 [DEBUG] [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connection established <localIP>:49298<-><reposerverIP>:443
12:01:59.346 [DEBUG] [org.apache.http.impl.conn.DefaultHttpResponseParser] Garbage in response:
[...]
12:01:59.347 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Close connection
12:01:59.347 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Shutdown connection
12:01:59.348 [DEBUG] [org.apache.http.impl.execchain.MainClientExec] Connection discarded
12:01:59.348 [DEBUG] [org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: Close connection
[...]
...though I don't know why Gradle transforms the "https" configuration into "http: ... :443". Does anyone have an idea about the configuration?
As I wasn't able to find the configuration error itself, I am happy to have solved the problem by simply:
uninstalling Gradle completely,
restarting Ubuntu, and
installing Gradle 2.14 again.
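For anyone hitting the same symptom: the debug log above shows 'https.proxyHost' being set to the repository host, and the connection route "http://<myrepo>:443" is how HttpClient renders a proxied HTTPS route rather than a rewritten URL, so a leftover proxy setting is a plausible culprit. Proxy settings of this kind typically live in gradle.properties; a hypothetical example to look for (host and port are placeholders):

```properties
# in ~/.gradle/gradle.properties or the project's gradle.properties
systemProp.https.proxyHost=myrepo.example.com
systemProp.https.proxyPort=443
```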
