I am trying to create a storage plugin configuration in Apache Drill (1.6) for Oracle JDBC. I have already copied ojdbc7.jar into the apache-drill-1.6.0/jars/3rdparty directory.
But I am getting the following error while trying to create the storage plugin:
Please retry: error (unable to create/ update storage)
Here is the storage plugin configuration:
{
  "type": "jdbc",
  "driver": "oracle.jdbc.OracleDriver",
  "url": "jdbc:oracle:thin:username/password#xx.xx.xx.xx:1521:***",
  "enabled": true
}
And here is the drill-override.conf file:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  sys.store.provider.local.path="/data/drill"
}
I restarted Drill after copying the ojdbc7.jar file into the Drill 3rdparty directory.
I found some similar issues here on Stack Overflow (storage_plugin failure & drill-1.3 & Oracle JDBC), but nothing worked for me.
Do you have any idea about this?
[I am using Apache Drill 1.6 in distributed mode, CentOS 7, and Java 1.8.]
I solved this issue myself. Here are the changes I made to resolve it:
1. I changed my storage plugin configuration (following this post):
{
  "type": "jdbc",
  "driver": "oracle.jdbc.driver.OracleDriver",
  "url": "jdbc:oracle:thin:@<IP>:<PORT>:<SID>",
  "username": "<USERNAME>",
  "password": "<PASSWORD>",
  "enabled": true
}
2. I changed drill-override.conf:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  drill.exec.sys.store.provider.local.path="/data/drill"
}
3. I also replaced ojdbc7.jar with ojdbc6.jar (I was trying to connect to an Oracle DB, version 11.2.0.4).
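If the plugin still fails to register, it can also help to confirm that the driver class and URL work outside Drill first. Below is a minimal, hypothetical JDBC check (the host, port, SID, and credentials are placeholders I made up); run it with ojdbc6.jar on the classpath, and reuse the same values in the storage plugin once it connects:
// Standalone sanity check of the Oracle thin driver and URL, independent of Drill.
// All connection details below are placeholders; run with ojdbc6.jar on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleThinCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class and URL format as in the storage plugin configuration.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        String url = "jdbc:oracle:thin:@10.10.10.10:1521:ORCL"; // jdbc:oracle:thin:@<IP>:<PORT>:<SID>
        try (Connection conn = DriverManager.getConnection(url, "USERNAME", "PASSWORD")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}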
I am trying to deploy the Debezium CDC connector to capture data from an Oracle DB. We have prepared the Oracle DB and the XStream connector.
I am getting the following exception:
{
"name": "test-debezium-2",
"connector": {
"state": "RUNNING",
"worker_id": "10.230.24.80:8084"
},
"tasks": [
{
"id": 0,
"state": "FAILED",
"worker_id": "10.230.24.80:8084",
"trace": "org.apache.kafka.connect.errors.ConnectException: An exception ocurred in the change event producer. This connector will be stopped.\n\tat
io.debezium.connector.base.ChangeEventQueue.throwProducerFailureIfPresent(ChangeEventQueue.java:170)\n\tat
io.debezium.connector.base.ChangeEventQueue.poll(ChangeEventQueue.java:151)\n\tat
io.debezium.connector.oracle.OracleConnectorTask.poll(OracleConnectorTask.java:110)\n\tat
org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:259)\n\tat
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:226)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat
java.base/java.lang.Thread.run(Thread.java:834)\n
Caused by: java.lang.UnsatisfiedLinkError: no ocijdbc11 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]\n\tat
java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2660)\n\tat
java.base/java.lang.Runtime.loadLibrary0(Runtime.java:829)\n\tat
java.base/java.lang.System.loadLibrary(System.java:1867)\n\tat
oracle.jdbc.driver.T2CConnection$1.run(T2CConnection.java:3541)\n\tat
java.base/java.security.AccessController.doPrivileged(Native Method)\n\tat
oracle.jdbc.driver.T2CConnection.loadNativeLibrary(T2CConnection.java:3537)\n\tat
oracle.jdbc.driver.T2CConnection.logon(T2CConnection.java:269)\n\tat
oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)\n\tat
oracle.jdbc.driver.T2CConnection.<init>(T2CConnection.java:165)\n\tat
oracle.jdbc.driver.T2CDriverExtension.getConnection(T2CDriverExtension.java:53)\n\tat
oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528)\n\tat
java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)\n\tat
java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)\n\tat
io.debezium.connector.oracle.OracleConnectionFactory.connect(OracleConnectionFactory.java:25)\n\tat
io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:756)\n\tat
io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:751)\n\tat
io.debezium.connector.oracle.OracleConnection.setSessionToPdb(OracleConnection.java:46)\n\tat
io.debezium.connector.oracle.OracleSnapshotChangeEventSource.prepare(OracleSnapshotChangeEventSource.java:70)\n\tat
io.debezium.relational.RelationalSnapshotChangeEventSource.execute(RelationalSnapshotChangeEventSource.java:104)\n\tat
io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:83)\n\t... 5 more\n"
}
],
"type": "source"
}
I have verified that the /usr/lib folder has the ocijdbc11.dll file. I have tried copying the file to /lib as well, but I am getting the same error.
I encountered this problem too.
First, go to Oracle and download the Instant Client:
instantclient download
Unzip it. (I assume you have already copied the ojdbc and XStream jars into the Kafka lib directory.)
Then add the environment variable:
vim /etc/profile
add:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/where/you/unzipped/instantclient
(the directory that contains libocijdbc11.so)
Save the file (:wq!), and remember to source /etc/profile.
Then run the connector again.
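If it is unclear whether the variable took effect for the Kafka Connect worker, a quick standalone check along these lines can confirm the JVM can see the Instant Client before restarting the connector. This is only a sketch: the host, service name, and credentials are placeholders, and it assumes the ojdbc jar is on the classpath.
// Minimal check that the native OCI library resolves and that an OCI connection can be opened.
// Run it with the same LD_LIBRARY_PATH that the Kafka Connect worker will use.
import java.sql.Connection;
import java.sql.DriverManager;

public class OciLibraryCheck {
    public static void main(String[] args) throws Exception {
        // Throws UnsatisfiedLinkError if libocijdbc11.so cannot be found on the library path.
        System.loadLibrary("ocijdbc11");
        System.out.println("Native OCI library loaded.");

        // The "oci" (not "thin") URL exercises the native client; all details are placeholders.
        String url = "jdbc:oracle:oci:@//db-host:1521/ORCLCDB";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("OCI connection OK: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}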
I am making a new composer-wallet with Composer 0.19.0.
All tests passed fine (the tests are based on composer-wallet-filesystem).
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried updating jszip to the latest version in composer-cli, but I get the same problem.
Here is the environment variable used to configure the connection:
export NODE_CONFIG='{
  "composer": {
    "wallet": {
      "type": "composer-wallet-mongodb",
      "desc": "Uses a local mongodb instance",
      "options": {
        "uri": "mongodb://localhost:27017/yourCollection",
        "collectionName": "myWallet",
        "options": {
        }
      }
    }
  }
}'
Any help is welcome.
Hi, I am unable to connect MySQL to Apache Drill on a VM.
I have used JDBC connector 5.1.45, and the MySQL version is 5.7.20.
Also, in the VM it is giving a warning.
Download the mysql-connector-java-5.1.37-bin.jar connector and place it in the jars/3rdparty/ folder.
Modify conf/drill-override.conf to add the line below before the closing }:
drill.exec.sys.store.provider.local.path = "mysql-connector-java-5.1.37-bin.jar"
Restart Drill and try again.
The storage plugin should look something like this:
{
  "type": "jdbc",
  "driver": "com.mysql.jdbc.Driver",
  "url": "jdbc:mysql://localhost:3306",
  "username": "username",
  "password": "password",
  "enabled": true
}
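Once the plugin is saved and enabled (assume here it is registered under the name mysql), one way to confirm Drill can actually reach MySQL is to run a query through Drill's own JDBC driver. This is only a rough sketch: it assumes ZooKeeper on localhost:2181, drill-jdbc-all on the classpath, and a made-up database and table name.
// Rough sketch: query MySQL through Drill to confirm the jdbc storage plugin works.
// Assumes drill-jdbc-all-*.jar on the classpath, ZooKeeper at localhost:2181,
// a plugin registered as "mysql", and a hypothetical mydb.mytable.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrillMysqlCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=localhost:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM mysql.`mydb`.`mytable` LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getObject(1)); // print the first column of each row
            }
        }
    }
}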
My cluster config file is as follows:
{
"name": "SampleCluster",
"clusterConfigurationVersion": "1.0.0",
"apiVersion": "01-2017",
"nodes":
[
{
"nodeName": "vm0",
"iPAddress": "here is my VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
},
{
"nodeName": "vm1",
"iPAddress": "here is my another VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "vm2",
"iPAddress": "here is my another VPS ip",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r2",
"upgradeDomain": "UD2"
}
],
"properties": {
"reliabilityLevel": "Bronze",
"diagnosticsStore":
{
"metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
"dataDeletionAgeInDays": "7",
"storeType": "FileShare",
"IsEncrypted": "false",
"connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
},
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20031"
},
"isPrimary": true
}
],
"fabricSettings": [
{
"name": "Setup",
"parameters": [
{
"name": "FabricDataRoot",
"value": "C:\\ProgramData\\SF"
},
{
"name": "FabricLogRoot",
"value": "C:\\ProgramData\\SF\\Log"
}
]
}
]
}
}
It is almost identical to the demo file for an unsecured cluster from the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not connected locally; they all have public IPs. I don't know whether that might be the issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction about this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. All the nodes actually were on the same network; I had thought they weren't. (The "because 5" in the error output is the Windows access-denied code, which points to a permissions problem.) I enabled file sharing, and from the node where I ran the configuration test I accessed the shared folders on all the other nodes, providing the login credentials when prompted. After that it worked like a charm.
I know Pebble 2.x is a bit outdated, but that's the watch I recently got, and I am interested in writing a small app for it.
I am unable to load the resource files (images and fonts) in my Pebble app. Below is the error message when I run pebble build:
Setting top to : /home/mixi/Documents/pebble-dev/helloworld
Setting out to : /home/mixi/Documents/pebble-dev/helloworld/build
Checking for program gcc,cc : arm-none-eabi-gcc
Checking for program ar : arm-none-eabi-ar
Found Pebble SDK in : /home/mixi/.pebble-sdk/SDKs/current/sdk-core/pebble/aplite
'configure' finished successfully (0.220s)
Waf: Entering directory `/home/mixi/Documents/pebble-dev/helloworld/build'
Error Generating Resources: File: bt-icon.png has specified invalid type: bitmap
Must be one of (raw, png, png-trans, font)
Generating resources failed
My appinfo.json:
{
  "uuid": "93c49fe2-0b1e-44b8-8fff-22d9c87adab9",
  "shortName": "helloworld",
  "longName": "helloworld",
  "companyName": "MakeAwesomeHappen",
  "versionLabel": "1.0",
  "sdkVersion": "2.9",
  "targetPlatforms": ["aplite", "basalt", "chalk"],
  "watchapp": {
    "watchface": true
  },
  "appKeys": {
    "dummy": 0
  },
  "resources": {
    "media": [
      {
        "type": "bitmap",
        "name": "IMAGE_BT_ICON",
        "file": "bt-icon.png"
      }
    ]
  },
  "versionCode": 1
}
My Pebble version:
Pebble Tool v4.0 (active SDK: v2.9)
I also tried creating a test app on CloudPebble with their sample. The sample runs fine without resources, but it also fails when I add a new resource to the project. Is there a fix for this?
Apparently, the Pebble 2.x SDK does not accept the "bitmap" resource type. All I had to do was change the resource type to "png" in the JSON file:
"media": [
{
"type": "png",
"name": "IMAGE_BT_ICON",
"file": "bt-icon.png"
}
]
As for CloudPebble, the drop-down for Resource Type doesn't seem to show up, and it selects Bitmap by default. Maybe it will be fixed in the near future.