Getting an error while connecting Databricks to Azure SQL DB with ActiveDirectoryPassword (JDBC)

I am trying to connect to an Azure SQL DB from Databricks using AAD password authentication. I imported the Azure SQL DB and adal4j libraries, but I still get the error below:
java.lang.NoClassDefFoundError: com/nimbusds/oauth2/sdk/AuthorizationGrant
stack trace:
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:24)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3609)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3580)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3548)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4290)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3157)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:82)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3121)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2026)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:5
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:590)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:474)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:548)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:380)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:327)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.nimbusds.oauth2.sdk.AuthorizationGrant
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
I imported the nimbusds library into my workspace as well. Here is my config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession.builder().getOrCreate()
val config = Config(Map(
  "url" -> "ServerName.database.windows.net",
  "databaseName" -> "dbname",
  "dbTable" -> "dbo.test",
  "user" -> "alias@domain.com",
  "password" -> "pwd",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt" -> "true",
  "trustServerCertificate" -> "false",
  "hostNameInCertificate" -> "*.database.windows.net"
))
val collection = spark.read.sqlDB(config)
collection.show()
Please help if anyone has resolved this issue.

Click here to download a working notebook.
Create a Databricks Cluster
Known working configuration - Databricks Runtime 5.2 (includes Apache Spark 2.4.0, Scala 2.11)
Install the Spark Connector for Microsoft Azure SQL Database and SQL Server
Navigate to Cluster > Library > Install New > Maven > Search Packages
Switch to Maven Central
Search for azure-sqldb-spark (com.microsoft.azure:azure-sqldb-spark)
Click Select
Click Install
Known working version - com.microsoft.azure:azure-sqldb-spark:1.0.2
Update Variables
Update the variable values (clusterName, server, database, table, username, password); these are read from notebook widgets via dbutils.widgets.get
Run the Initialisation command (ONCE ONLY)
This will do the following:
Create a folder called init under dbfs:/databricks/init/
Create a sub-folder with the name of the Databricks cluster
Create a bash script per dependency
Bash Script Commands:
* wget: Retrieve content from a web server
* --quiet: Turns off wget's output
* -O: Write the downloaded content to the specified output file
Dependencies:
http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
Restart the Databricks Cluster
This is needed to execute the init script.
Run the last cell in this Notebook
This will test the ability to connect to an Azure SQL Database via Active Directory authentication.
Init Command
// Initialisation
// This code block only needs to be run once to create the init script for the cluster (file remains on restart)
// Get the cluster name
var clusterName = dbutils.widgets.get("cluster")
// Create dbfs:/databricks/init/ if it doesn’t exist.
dbutils.fs.mkdirs("dbfs:/databricks/init/")
// Create a directory named (clusterName) using Databricks File System - DBFS.
dbutils.fs.mkdirs(s"dbfs:/databricks/init/$clusterName/")
// Create the adal4j script.
dbutils.fs.put(s"/databricks/init/$clusterName/adal4j-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
wget --quiet -O /mnt/jars/driver-daemon/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar""", true)
// Create the oauth2 script.
dbutils.fs.put(s"/databricks/init/$clusterName/oauth2-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
wget --quiet -O /mnt/jars/driver-daemon/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar""", true)
// Create the json script.
dbutils.fs.put(s"/databricks/init/$clusterName/json-smart-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
wget --quiet -O /mnt/jars/driver-daemon/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar""", true)
// Create the jwt script.
dbutils.fs.put(s"/databricks/init/$clusterName/jwt-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
wget --quiet -O /mnt/jars/driver-daemon/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar""", true)
// Check that the cluster-specific init script exists.
display(dbutils.fs.ls(s"dbfs:/databricks/init/$clusterName/"))
Test Command
// Connect to Azure SQL Database via Active Directory Password Authentication
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
// Get Widget Values
var server = dbutils.widgets.get("server")
var database = dbutils.widgets.get("database")
var table = dbutils.widgets.get("table")
var username = dbutils.widgets.get("user")
var password = dbutils.widgets.get("password")
val config = Config(Map(
  "url" -> s"$server.database.windows.net",
  "databaseName" -> s"$database",
  "dbTable" -> s"$table",
  "user" -> s"$username",
  "password" -> s"$password",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt" -> "true",
  "ServerCertificate" -> "false",
  "hostNameInCertificate" -> "*.database.windows.net"
))
val collection = sqlContext.read.sqlDB(config)
collection.show()

As a 2020 update:
I set up the cluster init scripts as described, but in the end my working setup didn't seem to require them.
I ended up using Scala 2.11 (note: 2.11) and these libraries installed through the UI: com.microsoft.azure:azure-sqldb-spark:1.0.2 and mssql_jdbc_8_2_2_jre8.jar (note: jre8). I also had to explicitly specify the driver class in the config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
val config = Config(Map(
  "url" -> "....database.windows.net",
  "databaseName" -> "...",
  "dbTable" -> "...",
  "accessToken" -> "...",
  "hostNameInCertificate" -> "*.database.windows.net",
  "encrypt" -> "true",
  "ServerCertificate" -> "false",
  "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver"
))
val collection = spark.read.sqlDB(config)
collection.show()
Token acquisition was done with MSAL (Python):
import msal

TenantId = "...guid..."
authority = "https://login.microsoftonline.com/" + TenantId
scope = "https://database.windows.net//.default"  # 🤦 yes, with the double "//"
ServicePrincipalId = "...guid..."
ServicePrincipalPwd = "secret"

app = msal.ConfidentialClientApplication(
    client_id=ServicePrincipalId,
    authority=authority,
    client_credential=ServicePrincipalPwd)

result = app.acquire_token_silent(scopes=[scope], account=None)
if not result:
    result = app.acquire_token_for_client(scopes=[scope])
if "access_token" in result:
    sqlAzureAccessToken = result["access_token"]
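The resulting sqlAzureAccessToken is what goes into the "accessToken" field of the Scala config above. One way to hand it from the Python cell to the Scala cell (this hand-off is my own sketch, not part of the original answer, and the conf key name is arbitrary) is via the Spark conf:

# Assumes a Databricks notebook where `spark` (the SparkSession) is predefined.
# Stash the token under an arbitrary key so a following Scala cell can read it
# back with spark.conf.get("myapp.sqlAccessToken").
spark.conf.set("myapp.sqlAccessToken", sqlAzureAccessToken)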

Related

PowerShell DSC resource MSFT_PackageResource failed: The return code 1618 was not expected. Configuration is likely not correct

I have an exe file downloaded to a specific folder in the VM, and I am trying to install Adobe Reader using PowerShell DSC code.
The script fails with the error below during execution (the configuration is invoked through an ARM template); however, if I check inside the VM, Adobe Reader is installed.
I tried running the same script manually inside the VM and did not face any error.
[{"code":"VMExtensionProvisioningError","message":"VM has reported a failure when processing extension 'configureWindowsServer'. Error message: "DSC Configuration 'Adobe' completed with error(s). Following are the first few: PowerShell DSC resource MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1618 was not expected. Configuration is likely not correct The SendConfigurationApply function did not succeed."\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionDSCWindowsTroubleshoot "}]}
Configuration Adobe
{
    $PackagesFolder = "C:\Packages\Adobe"
    $AcrobatReader = @{
        "Name" = "Adobe Acrobat Reader DC"
        "ProductId" = "XXXXXX-XXXXX-XXXXXXX"
        "Installer" = "AcroRdrDC.exe"
        "FileHash" = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        "HashAlgorithm" = "SHA256"
        "DestinationPath" = "$PackagesFolder\AdobeAcrobatReaderDC"
        "Arguments" = "/msi EULA_ACCEPT=YES /qn"
    }
    Package AdobeAcrobatReaderDC {
        Ensure = "Present"
        Name = $AcrobatReader.Name
        ProductId = $AcrobatReader.ProductId
        Path = ("{0}\{1}" -f $AcrobatReader.DestinationPath, $AcrobatReader.Installer)
        Arguments = $AcrobatReader.Arguments
    }
}

How to add a service principal to an Azure Databricks workspace using the Databricks CLI from Cloud Shell

I tried adding a service principal to an Azure Databricks workspace using Cloud Shell, but I am getting an error. I am able to look at all the clusters in the workspace, and I was the one who created that workspace. Do I need to be in the admin group if I want to add a service principal to the workspace?
curl --netrc -X POST \
  https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net/api/2.0/preview/scim/v2/ServicePrincipals \
  --header 'Content-type: application/scim+json' \
  --data @create-service-principal.json \
  | jq .
The file has the following content:
{ "displayName": "sp-name", "applicationId": "a9217fxxxxcd-9ab8-dxxxxxxxxxxxxx", "entitlements": [ { "value": "allow-cluster-create" } ], "schemas": [ "urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal" ], "active": true }
Here is the error I am getting:
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 279 100 279 0 0 5166 0 --:--:-- --:--:-- --:--:-- 5264
parse error: Invalid numeric literal at line 2, column 0
Do I need to be in admin group if I want to add Service Principal to workspace?
The issue is with the JSON file, not with access to the admin group.
You need to check the double quotes in line 2 of your JSON file.
You can refer to this GitHub link.
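A quick way to pinpoint such a problem (my own suggestion, not from the original answer) is to parse the file with Python's standard json module, which reports the line and column of the first syntax error:

import json

# json.load raises json.JSONDecodeError with the offending line/column
# if create-service-principal.json is not valid JSON.
with open("create-service-principal.json") as f:
    json.load(f)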
Try this code in Python that you can run in a Databricks notebook:
import pandas
import json
import requests
# COMMAND ----------
# MAGIC %md ### define variables
# COMMAND ----------
pat = 'EnterPATHere' # paste PAT. Get it from settings > user settings
workspaceURL = 'EnterWorkspaceURLHere' # paste the workspace url in the format of 'https://adb-1234567.89.azuredatabricks.net'
applicationID = 'EnterApplicationIDHere' # paste ApplicationID / ClientID of Service Principal. Get it from Azure Portal
friendlyName = 'AddFriendlyNameHere' # paste FriendlyName of ServicePrincipal. Get it from Azure Portal
# COMMAND ----------
# MAGIC %md ### add service principal
# COMMAND ----------
payload_raw = {
    'schemas': ['urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal'],
    'applicationId': applicationID,
    'displayName': friendlyName,
    'groups': [],
    'entitlements': []
}
payload = json.loads(json.dumps(payload_raw))

response = requests.post(workspaceURL + '/api/2.0/preview/scim/v2/ServicePrincipals',
                         headers={'Authorization': 'Bearer ' + pat,
                                  'Content-Type': 'application/scim+json'},
                         data=json.dumps(payload))
response.content
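A small addition (not in the original script): it can help to check the HTTP status code as well as the body, since a successful SCIM create typically returns 201:

# 201 indicates the service principal was created; anything else warrants a look at the body.
print(response.status_code)
print(response.content)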
I have actually published a blog post where a Python script is provided to fully manage service principals and access control in Databricks workspaces.

How to abstract DB connection in AWS Lambda?

I'm building an application in AWS with quite a few Lambda functions to write, and all of them will create a database connection to query with, by running the following code:
import mysql.connector

mydb = mysql.connector.connect(
    host="endpoint.rds.amazonaws.com",
    user="user",
    passwd="password",
    database="dbname"
)
Now, I don't want to have to explicitly include this code in every single Lambda function - I'd rather put it somewhere else (either in a Layer or a separate Lambda function), so that this could be done simply by something like this:
mydb = ConnectToDB()
Any thoughts on how to do this?
You have the right idea. Assuming you are using Python, I would create a layer package similar to the following:
python/
    myPackage.py
    mysql/
Where mysql/ contains the mysql connector package and myPackage.py includes some variation of:
import mysql.connector

def ConnectToDB(**kwargs):
    return mysql.connector.connect(
        host=kwargs.get("YOUR_ENDPOINT"),
        user=kwargs.get("YOUR_USER"),
        passwd=kwargs.get("YOUR_PASSWORD"),
        database=kwargs.get("YOUR_DBNAME")
    )
Then use this script to create a layer in lambda:
#!/bin/bash
#Required variables
LAYER_NAME="YOUR_LAYER_NAME"
LAYER_DESCRIPTION="YOUR_LAYER_DESCRIPTION"
LAYER_RUNTIMES="python3.6 python3.7"
S3_BUCKET="YOUR_S3_BUCKET"
#Zip Package Files
zip -r ${LAYER_NAME}.zip .
echo "Zipped ${LAYER_NAME}"
#Upload Package to Lambda
aws s3 cp ./${LAYER_NAME}.zip s3://${S3_BUCKET}
#Create new layer
aws lambda publish-layer-version --layer-name ${LAYER_NAME} --description "${LAYER_DESCRIPTION}" --content S3Bucket=${S3_BUCKET},S3Key=${LAYER_NAME}.zip --compatible-runtimes ${LAYER_RUNTIMES}
#Cleanup zip files
rm ${LAYER_NAME}.zip
You can then associate the layer with your Lambda function and import your package in the Lambda using the following syntax:
from myPackage import ConnectToDB

connectionParams = {
    "YOUR_ENDPOINT": ...,
    "YOUR_USER": ...,
    "YOUR_PASSWORD": ...,
    "YOUR_DBNAME": ...
}
mydb = ConnectToDB(**connectionParams)
Solved! I created a Python file called DBConnections.py, which includes the function below, and I included it in the deployment package for my AWS Lambda Layer.
import mysql.connector

def Connect():
    mydb = mysql.connector.connect(
        host="endpoint.amazonaws.com",
        user="user",
        passwd="password",
        database="mydbname"
    )
    return mydb
After it was deployed, the only thing I have to do to invoke this is:
from DBConnections import Connect
mydb = Connect()
Voilà.
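One possible refinement (my own sketch, not part of the original answer): instead of hard-coding credentials in the layer, Connect() could read them from the Lambda function's environment variables; the variable names below are assumptions:

import os
import mysql.connector

def Connect():
    # DB_HOST, DB_USER, DB_PASSWORD and DB_NAME are assumed environment
    # variable names configured on the Lambda function.
    return mysql.connector.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        passwd=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"]
    )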

'No such file or directory' error when using buildGoPackage in nix

I'm trying to build the hasura cli: https://github.com/hasura/graphql-engine/tree/master/cli with the following code (deps derived from dep2nix):
{ buildGoPackage, fetchFromGitHub }:

buildGoPackage rec {
  version = "1.0.0-beta.2";
  name = "hasura-${version}";

  goPackagePath = "github.com/hasura/graphql-engine";
  subPackages = [ "cli" ];

  src = fetchFromGitHub {
    owner = "hasura";
    repo = "graphql-engine";
    rev = "v${version}";
    sha256 = "1b40s41idkp1nyb9ygxgsvrwv8rsll6dnwrifpn25bvnfk8idafr";
  };

  goDeps = ./deps.nix;
}
but I get the following errors after the post-installation fixup step:
find: '/nix/store/gkck68cm2z9k1qxgmh350pq3kwsbyn8q-hasura-cli-1.0.0-beta.2': No such file or directory.
What am I doing wrong here? For reference, I'm on macOS and using home-manager.
For anyone still wondering, there are a couple of things to consider:
* dep has been deprecated in favor of Go modules.
* This is also reflected in Nix, as buildGoPackage is now legacy and has moved to buildGoModule.
* There is already a hasura-cli package in nixpkgs. You can just use it with nix-shell -p hasura-cli.

I cannot install MySQL JDBC driver on Ubuntu 16.04

I downloaded the MySQL JDBC driver to connect Scala to a MySQL database, but I cannot install it with java -jar mysql-connector-java-5.1.45-bin.jar. It says, no main manifest attribute, in mysql-connector-java-5.1.45-bin.jar
I cannot find any MANIFEST.MF file.
Any help would be appreciated.
EDIT: warnings after I run sbt run
[info] Loading project definition from /home/alessandro/Scala/tests/project
[info] Loading settings from build.sbt ...
[info] Set current project to MyProject (in build file:/home/alessandro/Scala/tests/)
[info] Running tests.ScalaJdbcConnectSelect
Sun Feb 18 21:51:33 CET 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'pmanager.user' doesn't exist
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.Util.getInstance(Util.java:408)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2480)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2438)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1381)
at tests.ScalaJdbcConnectSelect$.delayedEndpoint$tests$ScalaJdbcConnectSelect$1(ScalaJdbcConnectSelect.scala:16)
at tests.ScalaJdbcConnectSelect$delayedInit$body.apply(ScalaJdbcConnectSelect.scala:5)
at scala.Function0.apply$mcV$sp(Function0.scala:34)
at scala.Function0.apply$mcV$sp$(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App.$anonfun$main$1$adapted(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:389)
at scala.App.main(App.scala:76)
at scala.App.main$(App.scala:74)
at tests.ScalaJdbcConnectSelect$.main(ScalaJdbcConnectSelect.scala:5)
at tests.ScalaJdbcConnectSelect.main(ScalaJdbcConnectSelect.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at sbt.Run.invokeMain(Run.scala:93)
at sbt.Run.run0(Run.scala:87)
at sbt.Run.execute$1(Run.scala:65)
at sbt.Run.$anonfun$run$4(Run.scala:77)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:10)
at sbt.TrapExit$App.run(TrapExit.scala:252)
at java.base/java.lang.Thread.run(Thread.java:844)
[success] Total time: 3 s, completed Feb 18, 2018, 9:51:34 PM
UPDATE:
I found two options:
The first:
In your build.sbt, replace the line
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.24"
with this:
libraryDependencies ++= Seq(
"mysql" % "mysql-connector-java" % "5.1.16"
)
In command line:
sbt compile
sbt run
The second:
Use this on the command line:
set fullClasspath in Compile += Attributed.blank(file("path-to-your-connector-jar"))
sbt compile
sbt run
After that, the problem with the driver should disappear, but check your database connection details.
BEFORE UPDATE:
In your Scala project you can also use this:
package tests

import java.sql.{Connection, DriverManager}

object ScalaJdbcConnectSelect extends App {
  // connect to the database named "mysql" on port 8889 of localhost
  val url = "jdbc:mysql://localhost:8889/mysql"
  val driver = "com.mysql.jdbc.Driver"
  val username = "root"
  val password = "root"
  var connection: Connection = _
  try {
    Class.forName(driver)
    connection = DriverManager.getConnection(url, username, password)
    val statement = connection.createStatement
    val rs = statement.executeQuery("SELECT host, user FROM user")
    while (rs.next) {
      val host = rs.getString("host")
      val user = rs.getString("user")
      println("host = %s, user = %s".format(host, user))
    }
  } catch {
    case e: Exception => e.printStackTrace
  }
  connection.close
}
Link to source 1
Link to source 2
Please read this line:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'pmanager.user' doesn't exist
Check whether such a table exists in your database (for example, run SHOW TABLES in the pmanager schema). If it does, keep only the part after the dot (user) in your query.
