I am trying to connect to Azure SQL DB from Databricks with AAD - Password auth. I imported the Azure SQL DB & adal4j libs, but I am still getting the error below:
java.lang.NoClassDefFoundError: com/nimbusds/oauth2/sdk/AuthorizationGrant
stack trace:
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:24)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3609)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3580)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3548)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4290)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3157)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:82)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3121)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2026)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:5
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:590)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:474)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:548)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:380)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:327)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.nimbusds.oauth2.sdk.AuthorizationGrant
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
I imported the nimbusds lib into my workspace as well.
Here is my config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession.builder().getOrCreate()
val config = Config(Map(
"url" -> "ServerName.database.windows.net",
"databaseName" -> "dbname",
"dbTable" -> "dbo.test",
"user" -> "alias@domain.com",
"password" -> "pwd",
"authentication" -> "ActiveDirectoryPassword",
"encrypt" -> "true",
"trustServerCertificate"->"false",
"hostNameInCertificate"->"*.database.windows.net"
))
val collection = spark.read.sqlDB(config)
collection.show()
Please help me if anyone has resolved this issue.
Below is the setup from a working notebook.
Create a Databricks Cluster
Known working configuration - Databricks Runtime 5.2 (includes Apache Spark 2.4.0, Scala 2.11)
Install the Spark Connector for Microsoft Azure SQL Database and SQL Server
Navigate to Cluster > Library > Install New > Maven > Search Packages
Switch to Maven Central
Search for azure-sqldb-spark (com.microsoft.azure:azure-sqldb-spark)
Click Select
Click Install
Known working version - com.microsoft.azure:azure-sqldb-spark:1.0.2
Update Variables
Update variable values (clusterName, server, database, table, username, password)
Run the Initialisation command (ONCE ONLY)
This will do the following:
Create a folder called init under dbfs:/databricks/init/
Create a sub-folder with the name of the Databricks cluster
Create a bash script per dependency
Bash Script Commands:
* wget: Retrieve content from a web server
* --quiet: Turns off wget's output
* -O: Write the downloaded content to the specified file
Dependencies:
http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
Restart the Databricks Cluster
This is needed to execute the init script.
Run the last cell in this Notebook
This will test the ability to connect to an Azure SQL Database via Active Directory authentication.
Init Command
// Initialisation
// This code block only needs to be run once to create the init script for the cluster (file remains on restart)
// Get the cluster name
var clusterName = dbutils.widgets.get("cluster")
// Create dbfs:/databricks/init/ if it doesn’t exist.
dbutils.fs.mkdirs("dbfs:/databricks/init/")
// Create a directory named (clusterName) using Databricks File System - DBFS.
dbutils.fs.mkdirs(s"dbfs:/databricks/init/$clusterName/")
// Create the adal4j script.
dbutils.fs.put(s"/databricks/init/$clusterName/adal4j-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
wget --quiet -O /mnt/jars/driver-daemon/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar""", true)
// Create the oauth2 script.
dbutils.fs.put(s"/databricks/init/$clusterName/oauth2-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
wget --quiet -O /mnt/jars/driver-daemon/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar""", true)
// Create the json script.
dbutils.fs.put(s"/databricks/init/$clusterName/json-smart-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
wget --quiet -O /mnt/jars/driver-daemon/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar""", true)
// Create the jwt script.
dbutils.fs.put(s"/databricks/init/$clusterName/jwt-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
wget --quiet -O /mnt/jars/driver-daemon/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar""", true)
// Check that the cluster-specific init script exists.
display(dbutils.fs.ls(s"dbfs:/databricks/init/$clusterName/"))
Test Command
// Connect to Azure SQL Database via Active Directory Password Authentication
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
// Get Widget Values
var server = dbutils.widgets.get("server")
var database = dbutils.widgets.get("database")
var table = dbutils.widgets.get("table")
var username = dbutils.widgets.get("user")
var password = dbutils.widgets.get("password")
val config = Config(Map(
"url" -> s"$server.database.windows.net",
"databaseName" -> s"$database",
"dbTable" -> s"$table",
"user" -> s"$username",
"password" -> s"$password",
"authentication" -> "ActiveDirectoryPassword",
"encrypt" -> "true",
"trustServerCertificate" -> "false",
"hostNameInCertificate" -> "*.database.windows.net"
))
val collection = sqlContext.read.sqlDB(config)
collection.show()
As a 2020 update:
I did the cluster init scripts as described, but in the end my working setup didn't seem to require them.
I ended up using Scala 2.11 (note 2.11) and these libraries installed through the UI: com.microsoft.azure:azure-sqldb-spark:1.0.2 and mssql_jdbc_8_2_2_jre8.jar (note jre8). I also had to explicitly mention the driver class in the config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
val config = Config(Map(
"url" -> "....database.windows.net",
"databaseName" -> "...",
"dbTable" -> "...",
"accessToken" -> "...",
"hostNameInCertificate" -> "*.database.windows.net",
"encrypt" -> "true",
"trustServerCertificate" -> "false",
"driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver"
))
val collection = spark.read.sqlDB(config)
collection.show()
Token acquisition was done with msal (Python):
import msal

TenantId = "...guid..."
authority = "https://login.microsoftonline.com/" + TenantId
scope = "https://database.windows.net//.default" #🤦 yes, with double "//"
ServicePrincipalId = "...guid..."
ServicePrincipalPwd = "secret"

app = msal.ConfidentialClientApplication(client_id=ServicePrincipalId, authority=authority, client_credential=ServicePrincipalPwd)

result = app.acquire_token_silent(scopes=[scope], account=None)
if not result:
    # msal expects a list of scopes, not a bare string
    result = app.acquire_token_for_client(scopes=[scope])
if "access_token" in result:
    sqlAzureAccessToken = result["access_token"]
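Note that this leaves the token in a Python variable while the connector config above is Scala. A minimal sketch of one way to bridge the two notebook cells, assuming they share the same SparkSession (the conf key name here is made up for illustration):
# Python cell: publish the token via the shared Spark conf
spark.conf.set("myapp.sqlAzureAccessToken", sqlAzureAccessToken)
// Scala cell: read it back and pass it as the "accessToken" config value
val sqlAzureAccessToken = spark.conf.get("myapp.sqlAzureAccessToken")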
I'm starting to learn and experiment with Slick.
I'm trying to connect to an Oracle dev database, set up by our DBA.
However, I am encountering an issue and can't connect.
Here is what I did so far:
oracledev = {
url = "jdbc:oracle:thin:@//vdevdbms2:4208/TPSDEV.IADB.ORG"
driver = com.typesafe.slick.driver.oracle.OracleDriver
connectionPool = disable
keepAliveConnection = true
}
I have the following in my build
resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/maven-releases/"
libraryDependencies ++=
Seq(
"com.smartlogic.osclient" % "Semaphore-OS-Client" % "Semaphore-3.7.2",
"com.typesafe.slick" %% "slick-extensions" % "3.1.0",
"org.slf4j" % "slf4j-nop" % "1.6.4"
)
The code so far is simply:
import slick.jdbc.JdbcBackend.Database // needed for Database.forConfig

object SlickSpike extends App {
  val db = Database.forConfig("oracledev")
}
I get the following error:
Exception in thread "main" java.lang.ClassNotFoundException: disable
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at slick.util.ClassLoaderUtil$$anon$1.loadClass(ClassLoaderUtil.scala:12)
at slick.jdbc.JdbcDataSource$.loadFactory$1(JdbcDataSource.scala:30)
at slick.jdbc.JdbcDataSource$.forConfig(JdbcDataSource.scala:39)
at slick.jdbc.JdbcBackend$DatabaseFactoryDef$class.forConfig(JdbcBackend.scala:268)
at slick.jdbc.JdbcBackend$$anon$3.forConfig(JdbcBackend.scala:33)
at SlickSpike$.delayedEndpoint$SlickSpike$1(SlickSpike.scala:16)
at SlickSpike$delayedInit$body.apply(SlickSpike.scala:14)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at SlickSpike$.main(SlickSpike.scala:14)
at SlickSpike.main(SlickSpike.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
If I remove the line:
connectionPool = disable
Then I get the following error:
Exception in thread "main" java.lang.ClassNotFoundException: slick.jdbc.hikaricp.HikariCPJdbcDataSource$
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at slick.util.ClassLoaderUtil$$anon$1.loadClass(ClassLoaderUtil.scala:12)
at slick.jdbc.JdbcDataSource$.loadFactory$1(JdbcDataSource.scala:30)
at slick.jdbc.JdbcDataSource$.forConfig(JdbcDataSource.scala:35)
at slick.jdbc.JdbcBackend$DatabaseFactoryDef$class.forConfig(JdbcBackend.scala:268)
at slick.jdbc.JdbcBackend$$anon$3.forConfig(JdbcBackend.scala:33)
at SlickSpike$.delayedEndpoint$SlickSpike$1(SlickSpike.scala:16)
at SlickSpike$delayedInit$body.apply(SlickSpike.scala:14)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at SlickSpike$.main(SlickSpike.scala:14)
at SlickSpike.main(SlickSpike.scala)
What am I doing wrong?
I would simply like to have a connection pool of 10 and connect to the database, but I have no idea how to set it up. Can someone help?
Edit 2
I solved the initial issue, but I still have questions and can't get everything to work.
I changed my build as follows:
libraryDependencies ++=
Seq(
"org.slf4j" % "slf4j-api" % "1.7.13",
"org.slf4j" % "slf4j-simple" % "1.7.13",
"com.smartlogic.osclient" % "Semaphore-OS-Client" % "Semaphore-3.7.2" exclude("org.slf4j","slf4j-log4j12"),
"com.typesafe.slick" %% "slick" % "3.1.0",
"com.typesafe.slick" %% "slick-extensions" % "3.1.0",
"com.typesafe.slick" %% "slick-hikaricp" % "3.1.0",
"com.oracle" % "ojdbc6" % "11.2.0.2.0"
)
I resorted to adding slick-hikaricp, even though I did not originally intend to use it.
I also now understand that the driver in the config is the actual Oracle JDBC driver, not the Slick one. This is reflected in the change I made to my config, as can be seen below:
oracledev = {
url = "jdbc:oracle:thin:@//vdevdbms2:4208/TPSDEV.IADB.ORG"
driver = oracle.jdbc.OracleDriver
// connectionPool = disable
keepAliveConnection = true
//databaseName = "BRIKPOOLPARTYDEV"
user = "*******"
password = "*******"
}
Questions:
1 - Is slick-hikaricp required by default when using Oracle? Indeed, if I do not add it and comment out connectionPool = disable (which in my case does not work when uncommented anyway), the program does not compile.
2 - I'm still not able to connect. Am I missing something?
Please help.
The Oracle, DB2, and MS SQL drivers are not free. There is a separate 'slick-extensions' package that contains Slick drivers for them, which you can use for development, but you have to fork over cold cash to use them in production.
You should download ojdbc7.jar, put it in the lib folder at the root of your project, and rebuild.
The connectionPool entry in the config should be disabled instead of disable.
This config entry is used when deciding which JdbcDataSource will be loaded; with disable, Slick tries to load a JdbcDataSourceFactory with the name disable.
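Putting it together, a minimal sketch of the corrected config (the host, port, service name and credentials are the placeholders from the question):
oracledev = {
  url = "jdbc:oracle:thin:@//vdevdbms2:4208/TPSDEV.IADB.ORG"
  driver = oracle.jdbc.OracleDriver
  connectionPool = disabled
  keepAliveConnection = true
  user = "*******"
  password = "*******"
}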
I am unable to use json4s-jackson 3.2.11 within my Spark 1.4.1 Streaming application.
Thinking that it was the existing dependency within the spark-core project that was causing the problem, as explained here -> Is it possible to use json4s 3.2.11 with Spark 1.3.0?, I built Spark from source with an adjusted core/pom.xml, changing the reference from json4s-jackson_2.10:3.2.10 to 3.2.11, as the 3.2.10 version does not support extracting to implicit types.
I replaced the source jars referenced in my IntelliJ IDEA project with the rebuilt jars, but I am still getting the same errors as before. I fear that Spark must still be referencing json4s 3.2.10 somehow.
Here is my simple test:
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.json4s._
import org.json4s.jackson.JsonMethods._

object StreamingPredictor {
  implicit val formats = DefaultFormats

  case class event(Key: String,
                   sensorId: String,
                   sessionId: String,
                   deviceId: String,
                   playerId: String,
                   impressionId: String,
                   time: String,
                   eventName: String,
                   eventProperties: Map[String, Any],
                   dl: Array[List[(String, Any)]],
                   $post: Boolean,
                   $sync: Boolean)

  def parser(json: String): String = {
    val parsedJson = parse(json)
    val foo = parsedJson.extract[event]
    foo.eventName
  }

  def main(args: Array[String]) {
    val zkQuorum = "localhost:2181"
    val group = "myGroup"
    val topic = Map("test" -> 1)
    val sparkContext = new SparkContext("local[4]", "KafkaConsumer")
    val ssc = new StreamingContext(sparkContext, Seconds(1))
    val json = KafkaUtils.createStream(ssc, zkQuorum, group, topic)
    val eventName = json.map(_._2).map(parser)
    eventName.print()
    ssc.start()
    ssc.awaitTermination() // keep the streaming context alive
  }
}
The error I get when referencing json4s 3.2.11 in my application pom.xml file:
java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.render(Lorg/json4s/JsonAST$JValue;)Lorg/json4s/JsonAST$JValue;
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:174)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
And the error I get when I use json4s-jackson_2.10:3.2.10 in my application pom.xml file:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.json4s.package$MappingException: No usable value for eventProperties
No information known about type
at org.json4s.reflect.package$.fail(package.scala:96)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:443)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:463)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:463)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:451)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:491)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:488)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:500)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:488)
at org.json4s.Extraction$.extract(Extraction.scala:332)
at org.json4s.Extraction$.extract(Extraction.scala:42)
at org.json4s.ExtractableJsonAstNode.extract(ExtractableJsonAstNode.scala:21)
at com.pca.triggar.Streaming.StreamingPredictor$.parser(StreamingPredictor.scala:38)
at com.pca.triggar.Streaming.StreamingPredictor$$anonfun$2.apply(StreamingPredictor.scala:57)
at com.pca.triggar.Streaming.StreamingPredictor$$anonfun$2.apply(StreamingPredictor.scala:57)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1276)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1276)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.json4s.package$MappingException: No information known about type
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:465)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:491)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:488)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:500)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:488)
at org.json4s.Extraction$.extract(Extraction.scala:332)
at org.json4s.Extraction$$anonfun$extract$5.apply(Extraction.scala:316)
at org.json4s.Extraction$$anonfun$extract$5.apply(Extraction.scala:316)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.json4s.Extraction$.extract(Extraction.scala:316)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:431)
... 42 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Ok, I found the problem. As posted elsewhere, you need to compile against json4s 3.2.10. Apparently, doing so generates a binary which then works with Spark (version 1.5 in my case; the same holds in some earlier versions as well). It has to do with the default parameter in the render() method, which shows up in 3.2.11.
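For reference, a minimal sketch of the corresponding pin in sbt (my assumption; the thread itself uses Maven, where the equivalent is a dependency on org.json4s:json4s-jackson_2.10:3.2.10):
// Pin json4s to 3.2.10 so the binary matches what Spark was compiled against
libraryDependencies += "org.json4s" %% "json4s-jackson" % "3.2.10"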
I had the same issue with EMR 4.3.0 and Spark 1.6.
I solved it by installing json4s in a bootstrap action:
1. Download the json4s jar and put it in S3.
2. Create the following shell script and put it in S3:
#!/bin/bash
set -e
wget -S -T 10 -t 5 https://s3.amazonaws.com/your-bucketname/json4s-native_2.10-3.2.4.jar
mkdir -p /home/hadoop/lib
mv json4s-native_2.10-3.2.4.jar /home/hadoop/lib/
3. Add it as a bootstrap action in the EMR launch steps.
We have upgraded SonarQube from 4.0 to 4.5.4. When a new LDAP user tries to log in, or an administrator tries to show Global Permissions in the settings menu, a JDBCError occurs.
Here is the error that occurs when I want to show global permissions:
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: SELECT role FROM "group_roles" WHERE (resource_id is null and (group_id is null or group_id in(2,1,3)))
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 0ms Executed SQL: SELECT * FROM "user_roles" WHERE ("user_roles".user_id = 2)
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: SELECT "user_roles"."id" AS t0_r0, "user_roles"."user_id" AS t0_r1, "user_roles"."resource_id" AS t0_r2, "user_roles"."role" AS t0_r3, "users"."id" AS t1_r0, "users"."name" AS t1_r1, "users"."password" AS t1_r2, "users"."email" AS t1_r3, "users"."created" AS t1_r4, "users"."fullname" AS t1_r5, "users"."creationdate" AS t1_r6, "users"."disabled" AS t1_r7, "users"."lastactivation" AS t1_r8, "users"."link" AS t1_r9, "users"."accountreactivation" AS t1_r10, "users"."category" AS t1_r11, "users"."email" AS t1_r12, "users"."firstname" AS t1_r13, "users"."lastname" AS t1_r14, "users"."login" AS t1_r15, "users"."password" AS t1_r16, "users"."phoneno" AS t1_r17, "users"."warehouseholderno" AS t1_r18, "users"."login" AS t1_r19, "users"."name" AS t1_r20, "users"."email" AS t1_r21, "users"."crypted_password" AS t1_r22, "users"."salt" AS t1_r23, "users"."created_at" AS t1_r24, "users"."updated_at" AS t1_r25, "users"."remember_token" AS t1_r26, "users"."remember_token_expires_at" AS t1_r27, "users"."active" AS t1_r28 FROM "user_roles" LEFT OUTER JOIN "users" ON "users".id = "user_roles".user_id WHERE ("user_roles"."role" = 'profileadmin' AND "user_roles"."resource_id" IS NULL AND "users"."active" = 't')
2015.04.23 17:35:49 INFO http-bio-0.0.0.0-9100-exec-5 web[sql] 1ms Executed SQL: select 1
2015.04.23 17:35:49 ERROR http-bio-0.0.0.0-9100-exec-5 web[o.s.s.ui.JRubyFacade] Fail to render: http://build:9100/roles/global
ActiveRecord::JDBCError: ERROR: column users.password does not exist
Position: 184: SELECT "user_roles"."id" AS t0_r0, "user_roles"."user_id" AS t0_r1, "user_roles"."resource_id" AS t0_r2, "user_roles"."role" AS t0_r3, "users"."id" AS t1_r0, "users"."name" AS t1_r1, "users"."password" AS t1_r2, "users"."email" AS t1_r3, "users"."created" AS t1_r4, "users"."fullname" AS t1_r5, "users"."creationdate" AS t1_r6, "users"."disabled" AS t1_r7, "users"."lastactivation" AS t1_r8, "users"."link" AS t1_r9, "users"."accountreactivation" AS t1_r10, "users"."category" AS t1_r11, "users"."email" AS t1_r12, "users"."firstname" AS t1_r13, "users"."lastname" AS t1_r14, "users"."login" AS t1_r15, "users"."password" AS t1_r16, "users"."phoneno" AS t1_r17, "users"."warehouseholderno" AS t1_r18, "users"."login" AS t1_r19, "users"."name" AS t1_r20, "users"."email" AS t1_r21, "users"."crypted_password" AS t1_r22, "users"."salt" AS t1_r23, "users"."created_at" AS t1_r24, "users"."updated_at" AS t1_r25, "users"."remember_token" AS t1_r26, "users"."remember_token_expires_at" AS t1_r27, "users"."active" AS t1_r28 FROM "user_roles" LEFT OUTER JOIN "users" ON "users".id = "user_roles".user_id WHERE ("user_roles"."role" = 'profileadmin' AND "user_roles"."resource_id" IS NULL AND "users"."active" = 't')
On line #37 of app/views/roles/global.html.erb
34: <%= message("global_permissions.#{permission_key}.desc") -%>
35:
36:
37: <span id="users-<%= permission_key.parameterize -%>"><%= users(permission_key).map(&:name).join(', ') -%>
38: (<%= link_to_edit_roles_permission_form(message('select'), permission_key, nil, "select-users-#{permission_key}") -%>)<br/>
39:
40:
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:227:in `log'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:183:in `execute'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:275:in `select'
gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:202:in `jdbc_select_all'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all_with_query_cache'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql'
gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all_with_query_cache'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1624:in `select_all_rows'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1401:in `find_with_associations'
org/jruby/RubyKernel.java:1268:in `catch'
gems/gems/activerecord-2.3.15/lib/active_record/associations.rb:1399:in `find_with_associations'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:1580:in `find_every'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:619:in `find'
gems/gems/activerecord-2.3.15/lib/active_record/base.rb:639:in `all'
app/helpers/roles_helper.rb:24:in `users'
app/views/roles/global.html.erb:37
org/jruby/RubyArray.java:1613:in `each'
app/views/roles/global.html.erb:27
org/jruby/RubyKernel.java:2227:in `send'
gems/gems/actionpack-2.3.15/lib/action_view/renderable.rb:34:in `render'
gems/gems/actionpack-2.3.15/lib/action_view/base.rb:306:in `with_template'
gems/gems/actionpack-2.3.15/lib/action_view/renderable.rb:30:in `render'
And here is the error that occurs during login:
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 3ms Executed SQL: SELECT * FROM "users" WHERE ("users"."login" = 'a428774') LIMIT 1
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 6ms Executed SQL: SELECT * FROM "groups" WHERE ("groups"."name" = 'sonar-users') LIMIT 1
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 53ms Executed SQL: SELECT attr.attname, seq.relname FROM pg_class seq, pg_attribute attr, pg_depend dep, pg_namespace name, pg_constraint cons WHERE seq.oid = dep.objid AND seq.relkind = 'S' AND attr.attrelid = dep.refobjid AND attr.attnum = dep.refobjsubid AND attr.attrelid = cons.conrelid AND attr.attnum = cons.conkey[1] AND cons.contype = 'p' AND dep.refobjid = '"users"'::regclass
2015.04.24 10:35:28 INFO http-bio-0.0.0.0-9100-exec-22 web[sql] 2ms Executed SQL: INSERT INTO "users" ("name", "password", "email", "created", "fullname", "creationdate", "disabled", "lastactivation", "link", "accountreactivation", "category", "firstname", "lastname", "login", "phoneno", "warehouseholderno", "crypted_password", "salt", "created_at", "updated_at", "remember_token", "remember_token_expires_at", "active") VALUES('MUSTERMANN, Marc', NULL, 'marc.mustermann@company.com', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, 'a428774', NULL, NULL, 'e285ed59429a05e40b250577fee291addedd09a1', '708e1dc34acb880756709e8db0d66999672478dc', '2015-04-24 10:35:28.400000', '2015-04-24 10:35:28.400000', NULL, NULL, 't') RETURNING "id"
2015.04.24 10:35:28 ERROR http-bio-0.0.0.0-9100-exec-22 web[o.s.s.ui.JRubyFacade] Fail to render: http://build:9100/sessions/login
ActiveRecord::JDBCError: ERROR: column "password" of relation "users" does not exist
Position: 30: INSERT INTO "users" ("name", "password", "email", "created", "fullname", "creationdate", "disabled", "lastactivation", "link", "accountreactivation", "category", "firstname", "lastname", "login", "phoneno", "warehouseholderno", "crypted_password", "salt", "created_at", "updated_at", "remember_token", "remember_token_expires_at", "active") VALUES('MUSTERMANN, Marc', NULL, 'marc.mustermann@company.com', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, 'a428774', NULL, NULL, 'e285ed59429a05e40b250577fee291addedd09a1', '708e1dc34acb880756709e8db0d66999672478dc', '2015-04-24 10:35:28.400000', '2015-04-24 10:35:28.400000', NULL, NULL, 't') RETURNING "id"
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:227:in `log'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:183:in `execute'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:275:in `select'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:212:in `select_one'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:19:in `select_value'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/postgresql/adapter.rb:266:in `pg_insert'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:26:in `insert_with_query_dirty'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2967:in `create'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/timestamp.rb:53:in `create_with_timestamps'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:266:in `create_with_callbacks'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2933:in `create_or_update'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:250:in `create_or_update_with_callbacks'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2583:in `save'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/validations.rb:1089:in `save_with_validation'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/dirty.rb:79:in `save_with_dirty'
org/jruby/RubyKernel.java:2227:in `send'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:229:in `with_transaction_returning_status'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:182:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:228:in `with_transaction_returning_status'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:157:in `synchronize'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:182:in `transaction'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:122:in `synchronize'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:79:in `external_auth'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:101:in `auth'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:56:in `authenticate?'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/lib/need_authentication.rb:236:in `authenticate'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/app/controllers/sessions_controller.rb:30:in `login'
org/jruby/RubyKernel.java:2223:in `send'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/base.rb:1333:in `perform_action'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/filters.rb:617:in `call_filters'
/global/apps/java.build.sonartest/sonarqube-4.5.4/web/WEB-INF/gems/gems/actionpack-2.3.15/lib/action_controller/filters.rb:610:in `perform_action_with_filters'
Here are the plugins we use:
-rw-r--r-- 1 tsonar jtest 105 23. Apr 09:30 README.txt
-rw-r--r-- 1 tsonar jtest 2404852 23. Apr 09:31 sonar-checkstyle-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 10325 23. Apr 09:30 sonar-cobertura-plugin-1.6.3.jar
-rw-r--r-- 1 tsonar jtest 6228395 23. Apr 09:31 sonar-findbugs-plugin-3.1.jar
-rw-r--r-- 1 tsonar jtest 2507184 23. Apr 09:31 sonar-java-plugin-3.1.jar
-rw-r--r-- 1 tsonar jtest 30646 23. Apr 09:31 sonar-ldap-plugin-1.4.jar
-rw-r--r-- 1 tsonar jtest 3154834 23. Apr 12:33 sonar-pdfreport-plugin-1.4.jar
-rw-r--r-- 1 tsonar jtest 3568440 23. Apr 09:31 sonar-pmd-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 15342 23. Apr 09:31 sonar-useless-code-tracker-plugin-1.0.jar
-rw-r--r-- 1 tsonar jtest 340834 23. Apr 09:31 sonar-web-plugin-2.3.jar
-rw-r--r-- 1 tsonar jtest 13488 23. Apr 09:30 sonar-widget-lab-plugin-1.6.jar
For more information, please look at the sonar.properties below.
Thanks in advance.
Regards,
Gaby
sonar.properties
# This file must contain only ISO 8859-1 characters.
# See http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Properties.html#load(java.io.InputStream)
#
# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See http://redirect.sonarsource.com/doc/settings-encryption.html
#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.
# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=sonar
sonar.jdbc.password=sonartest
#----- Embedded Database (default)
# It does not accept connections from remote hosts, so the
# server and the analyzers must be executed on the same host.
#sonar.jdbc.url=jdbc:h2:tcp://localhost:9092/sonar
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092
#----- MySQL 5.x
#sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
#----- Oracle 10g/11g
# - Only thin client is supported
# - Only versions 11.2.* of Oracle JDBC driver are supported, even if connecting to lower Oracle versions.
# - The JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/
# - If you need to set the schema, please refer to http://jira.codehaus.org/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost/XE
#----- PostgreSQL 8.x/9.x
# If you don't use the schema named "public", please refer to http://jira.codehaus.org/browse/SONAR-5000
sonar.jdbc.url=jdbc:postgresql://sonartest:5432/test
#----- Microsoft SQLServer 2005/2008
# Only the distributed jTDS driver is supported.
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor
#----- Connection pool settings
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default its heap size is 768Mb.
# Use the following property to customize JVM options. Enabling the HotSpot Server VM
# mode (-server) is recommended.
# Note that the option -Dfile.encoding=UTF-8 is mandatory.
#sonar.web.javaOpts=-Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings
# like -Djava.awt.headless=true
#sonar.web.javaAdditionalOpts=
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Disabled when value is -1.
sonar.web.port=9100
# TCP port for incoming HTTPS connections. Disabled when value is -1 (default).
#sonar.web.https.port=-1
#
# Recommendation for HTTPS
# SonarQube natively supports HTTPS. However using a reverse proxy
# infrastructure is the recommended way to set up your SonarQube installation
# on production environments which need to be highly secured.
# This allows to fully master all the security parameters that you want.
# HTTPS - the alias used to for the server certificate in the keystore.
# If not specified the first key read in the keystore is used.
#sonar.web.https.keyAlias=
# HTTPS - the password used to access the server certificate from the
# specified keystore file. The default value is "changeit".
#sonar.web.https.keyPass=changeit
# HTTPS - the pathname of the keystore file where is stored the server certificate.
# By default, the pathname is the file ".keystore" in the user home.
# If keystoreType doesn't need a file use empty value.
#sonar.web.https.keystoreFile=
# HTTPS - the password used to access the specified keystore file. The default
# value is the value of sonar.web.https.keyPass.
#sonar.web.https.keystorePass=
# HTTPS - the type of keystore file to be used for the server certificate.
# The default value is JKS (Java KeyStore).
#sonar.web.https.keystoreType=JKS
# HTTPS - the name of the keystore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the keystore type is used (see sonar.web.https.keystoreType).
#sonar.web.https.keystoreProvider=
# HTTPS - the pathname of the truststore file which contains trusted certificate authorities.
# By default, this would be the cacerts file in your JRE.
# If truststoreFile doesn't need a file use empty value.
#sonar.web.https.truststoreFile=
# HTTPS - the password used to access the specified truststore file.
#sonar.web.https.truststorePass=
# HTTPS - the type of truststore file to be used.
# The default value is JKS (Java KeyStore).
#sonar.web.https.truststoreType=JKS
# HTTPS - the name of the truststore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the truststore type is used (see sonar.web.https.truststoreType).
#sonar.web.https.truststoreProvider=
# HTTPS - whether to enable client certificate authentication.
# The default is false (client certificates disabled).
# Other possible values are 'want' (certificates will be requested, but not required),
# and 'true' (certificates are required).
#sonar.web.https.clientAuth=false
# HTTPS - comma separated list of encryption ciphers to support for HTTPS connections.
# If specified, only the ciphers that are listed and supported by the SSL implementation will be used.
# By default, the default ciphers for the JVM will be used. Note that this usually means that the weak
# export grade ciphers, for instance RC4, will be included in the list of available ciphers.
# The ciphers are specified using the JSSE cipher naming convention (see
# https://www.openssl.org/docs/apps/ciphers.html)
# Example: sonar.web.https.ciphers=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
#sonar.web.https.ciphers=
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50 for each
# enabled connector.
#sonar.web.http.maxThreads=50
#sonar.web.https.maxThreads=50
# The minimum number of threads always kept running. The default value is 5 for each
# enabled connector.
#sonar.web.http.minThreads=5
#sonar.web.https.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25 for each enabled connector.
#sonar.web.http.acceptCount=25
#sonar.web.https.acceptCount=25
# Access logs are generated in the file logs/access.log. This file is rolled over when it's 5Mb.
# An archive of 3 files is kept in the same directory.
# Access logs are enabled by default.
#sonar.web.accessLogs.enable=true
# TCP port for incoming AJP connections. Disabled if value is -1. Disabled by default.
#sonar.ajp.port=-1
#--------------------------------------------------------------------------------------------------
# SEARCH INDEX
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process.
# JVM options. Note that enabling the HotSpot Server VM mode (-server) is recommended.
#sonar.search.javaOpts=-Xmx256m -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true \
# -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 \
# -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
# Same as previous property, but allows to not repeat all other settings
# like -Djava.awt.headless=true
#sonar.search.javaAdditionalOpts=
# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# This port must be private and must not be exposed to the Internet.
sonar.search.port=9102
#--------------------------------------------------------------------------------------------------
# UPDATE CENTER
# Update Center requires an internet connection to request http://update.sonarsource.org
# It is enabled by default.
sonar.updatecenter.activate=true
# HTTP proxy (default none)
http.proxyHost=proxy
http.proxyPort=4711
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# proxy authentication. The 2 following properties are used for HTTP and SOCKS proxies.
http.proxyUser=proxyuser
http.proxyPassword=proxypwd
#--------------------------------------------------------------------------------------------------
# LOGGING
# Level of information displayed in the logs: NONE (default), BASIC (functional information)
# and FULL (functional and technical details)
#sonar.log.profilingLevel=NONE
# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs
#--------------------------------------------------------------------------------------------------
# OTHERS
# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60
# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp
#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.
# Dev mode allows to reload web sources on changes and to restart server when new versions
# of plugins are deployed.
#sonar.web.dev=false
# Path to webapp sources for hot-reloading of Ruby on Rails, JS and CSS (only core,
# plugins not supported).
#sonar.web.dev.sources=/path/to/server/sonar-web/src/main/webapp
# Uncomment to enable the Elasticsearch HTTP connector, so that ES can be directly requested through
# http://lmenezes.com/elasticsearch-kopf/?location=http://localhost:9010
#sonar.search.httpPort=9010
#-------------------
# Sonar LDAP Plugin
#-------------------
# LDAP configuration
# General Configuration
sonar.security.realm=LDAP
sonar.security.savePassword=true
sonar.authenticator.createUsers=true
sonar.security.updateUserAttributes=true
sonar.security.localUsers=admin,analysers
ldap.url=ldap://ldap.company.com
# (optional) Bind DN is the username of an LDAP user to connect (or bind) with.
ldap.bindDn: cn=ldap,ou=oudaten,DC=company,DC=com
# (optional) Bind Password is the password of the user to connect with.
ldap.bindPassword: pwdldap
# User Configuration
ldap.user.baseDn=CN=Users, DC=company,DC=com
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
#ldap.group.baseDn=ou=Groups,dc=sonarsource,dc=com
#ldap.group.request=(&(objectClass=posixGroup)(memberUid={uid}))
# -------------------------------------------------------------------------
# Properties of older versions of the LDAP plugin
# -------------------------------------------------------------------------
# Login Attribute is the attribute in LDAP holding the user's login.
# Default is 'uid'. Set 'sAMAccountName' for Microsoft Active Directory
#ldap.user.loginAttribute: sAMAccountName
# Object class of LDAP users.
# Default is 'inetOrgPerson'. Set 'user' for Microsoft Active Directory.
#ldap.user.objectClass: user
sonar.log.profilingLevel=FULL
I have found the cause: the schema SonarQube uses must be the only schema in the database. Otherwise this error occurs.
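A quick way to verify, assuming PostgreSQL as in the configuration above (a sketch; run it against the SonarQube database):
-- List all non-system schemas; only the one SonarQube uses should exist
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('pg_catalog', 'information_schema');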
I'm trying to create a JDBC provider at the cell scope with scripting. I've found in the IBM documentation the way to create the JDBCProvider, but it creates the provider at the node scope:
providerName = 'DB2 Universal JDBC Driver Provider'
providerAttribs = [["xa", "false"], ["providerType", providerName], ['isolatedClassLoader', 'false'],
['nativepath', '${DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH}'],
['classpath', '${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc.jar;${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cisuz.jar']]
provider = AdminJDBC.createJDBCProvider(nodeName, serverName, providerName, 'com.ibm.db2.jcc.DB2ConnectionPoolDataSource', providerAttribs)
I've read the API for createJDBCProviderAtScope: http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Frxml_7adminjdbc.html and I've updated my code:
providerAttribs = [["xa", "false"], ["providerType", providerName], ['isolatedClassLoader', 'false'],
['nativepath', '${DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH}'],
['implementationClassName', 'com.ibm.db2.jcc.DB2ConnectionPoolDataSource'],
['classpath', '${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc.jar;${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cisuz.jar']]
provider = AdminJDBC.createJDBCProviderAtScope(cell, "DB2", providerName, providerName, 'Connection pool data source', providerAttribs)
But now I get the exception:
Exception: com.ibm.ws.scripting.ScriptingException com.ibm.ws.scripting.ScriptingException:
com.ibm.ws.scripting.ScriptingException:
com.ibm.websphere.management.cmdframework.CommandNotFoundException:
ADMF0006E: Step xa of command createJDBCProvider is not found.
What is the correct way to create a JDBCProvider at the cell scope?
Try this:
providerAttribs = []
providerAttribs.append(["xa", "false"])
providerAttribs.append(['providerType', 'DB2 Universal JDBC Driver Provider'])
providerAttribs.append(['isolatedClassLoader', 'false'])
providerAttribs.append(['nativepath', '${DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH}'])
providerAttribs.append(['implementationClassName', 'com.ibm.db2.jcc.DB2ConnectionPoolDataSource'])
providerAttribs.append(['classpath', '${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc.jar;${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cisuz.jar'])
providerAttribs.append(['name', 'DB2 Universal JDBC Driver Provider'])
provider = AdminConfig.create("JDBCProvider", AdminConfig.getid('/Cell:/'), providerAttribs)
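One caveat worth adding (my note, not part of the original answer): wsadmin keeps changes in a temporary workspace until they are saved, so persist the new provider afterwards:
# Persist the configuration change made above
AdminConfig.save()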
... I wouldn't be myself if I didn't take this opportunity to advertise the WDR library (available at http://wdr.github.io/WDR/):
variables = {}
variables['cellName'] = getid1('/Cell:/').name
loadConfiguration( 'cell_scope_provider.wdrc', variables )
The 'cell_scope_provider.wdrc' referenced above is a file containing configuration manifest:
Cell
*name $[cellName]
JDBCProvider
*name DB2 Universal JDBC Driver Provider
-xa false
-providerType DB2 Universal JDBC Driver Provider
-isolatedClassLoader false
-nativepath ${DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH}
-implementationClassName com.ibm.db2.jcc.DB2ConnectionPoolDataSource
-classpath ${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc.jar;${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cisuz.jar