I use c3p0 to connect to Impala over JDBC, but it fails. Could you give me some help? :)
ConnectPool.scala
import java.sql.Connection
import com.mchange.v2.c3p0.ComboPooledDataSource

class ConnectPool extends Serializable {

  private val cpds: ComboPooledDataSource = new ComboPooledDataSource(true)
  private val conf = Utils.getPropmap("env.properties")

  try {
    cpds.setJdbcUrl(conf("kudu.produce.url"))
    cpds.setDriverClass(conf("jdbc.driver"))
    cpds.setMaxPoolSize(400)
    cpds.setMinPoolSize(20)
    cpds.setAcquireIncrement(5)
    cpds.setMaxStatements(380)
  } catch {
    case e: Exception => e.printStackTrace()
  }

  def getConnection: Connection = {
    try {
      cpds.getConnection()
    } catch {
      case ex: Exception =>
        ex.printStackTrace()
        null
    }
  }
}
object ConnectManager {
  var kuduManager: ConnectPool = _

  def getConnectManager: ConnectPool = {
    synchronized {
      if (kuduManager == null) {
        kuduManager = new ConnectPool
      }
    }
    kuduManager
  }
}
main.scala
messages.foreachRDD(rdd => {
  val conn = ConnectManager.getConnectManager.getConnection
  val stmt = conn.createStatement
  if (!rdd.isEmpty() && rdd.count() > 0) {
    // initialize Spark
    val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
    try {
      // use stmt
    } catch {
      case e: Exception => print("\ntest\n")
    } finally {
      stmt.close()
      conn.close()
    }
  }
})
Output:
18/03/16 16:56:00 INFO c3p0.SQLWarnings: [Simba]ImpalaJDBCDriver Error setting default connection property values: {0}
java.sql.SQLWarning: [Simba]ImpalaJDBCDriver Error setting default connection property values: {0}
    at com.cloudera.jdbc.common.SWarningListener.createSQLWarning(Unknown Source)
    at com.cloudera.jdbc.common.SWarningListener.postWarning(Unknown Source)
    at com.cloudera.jdbc.common.SConnection.<init>(Unknown Source)
    at com.cloudera.jdbc.common4.C4SConnection.<init>(Unknown Source)
    at com.cloudera.jdbc.jdbc41.S41Connection.<init>(Unknown Source)
    at com.cloudera.impala.jdbc41.ImpalaJDBC41Connection.<init>(Unknown Source)
    at com.cloudera.impala.jdbc41.ImpalaJDBC41ObjectFactory.createConnection(Unknown Source)
    at com.cloudera.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
    at com.cloudera.jdbc.common.AbstractDriver.connect(Unknown Source)
    at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:119)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:143)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:132
The output you are seeing is a warning, not an error. c3p0 resets and prints any warnings that a Connection experiences prior to checking them back into the pool. This surprises developers sometimes, as very few developers bother to check for warnings, and so they often go unnoticed.
I don't know exactly what this warning means, but JDBC Connection warnings are often raised for non-serious conditions. Is your application working properly apart from the warning output?
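For reference, warnings can be inspected directly on a checked-out Connection through the standard JDBC API before it goes back to the pool. A minimal sketch (the WarningCheck class and dataSource parameter are just placeholders for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLWarning;
import javax.sql.DataSource;

public final class WarningCheck {
    // Walk the JDBC warning chain on a checked-out connection.
    static void logWarnings(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            SQLWarning warning = conn.getWarnings();
            while (warning != null) {
                System.out.println("SQL warning: " + warning.getMessage());
                warning = warning.getNextWarning();
            }
        }
    }
}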
If you decide you can live with the warnings, you can redirect them or shut them down by configuring the special logger com.mchange.v2.c3p0.SQLWarnings. Setting the log level of this logger to anything more severe than INFO will prevent the appearance of these messages.
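For example, if c3p0 happens to be routing its logging through java.util.logging, that could look like the sketch below (the class name is made up for illustration; with slf4j or log4j you would instead set the same logger name above INFO in that framework's configuration):

import java.util.logging.Level;
import java.util.logging.Logger;

public final class SilenceC3p0SqlWarnings {
    // Keep a strong reference so the JUL logger (and its level) is not garbage collected.
    private static final Logger SQL_WARNINGS =
            Logger.getLogger("com.mchange.v2.c3p0.SQLWarnings");

    public static void apply() {
        // Anything more severe than INFO hides the warning output shown above.
        SQL_WARNINGS.setLevel(Level.WARNING);
    }
}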
Related
In OkHttp's code, connection.idleAtNs is already assigned in the releaseConnectionNoEvents() method:
internal fun releaseConnectionNoEvents(): Socket? {
  val connection = this.connection!!
  connection.assertThreadHoldsLock()

  val calls = connection.calls
  val index = calls.indexOfFirst { it.get() == this@RealCall }
  check(index != -1)

  calls.removeAt(index)
  this.connection = null

  if (calls.isEmpty()) {
    connection.idleAtNs = System.nanoTime()
    if (connectionPool.connectionBecameIdle(connection)) {
      return connection.socket()
    }
  }

  return null
}
but why is it re-assigned here?
private fun pruneAndGetAllocationCount(connection: RealConnection, now: Long): Int {
  connection.assertThreadHoldsLock()

  val references = connection.calls
  var i = 0
  while (i < references.size) {
    val reference = references[i]

    if (reference.get() != null) {
      i++
      continue
    }

    // We've discovered a leaked call. This is an application bug.
    val callReference = reference as CallReference
    val message = "A connection to ${connection.route().address.url} was leaked. " +
        "Did you forget to close a response body?"
    Platform.get().logCloseableLeak(message, callReference.callStackTrace)

    references.removeAt(i)
    connection.noNewExchanges = true

    // If this was the last allocation, the connection is eligible for immediate eviction.
    if (references.isEmpty()) {
      connection.idleAtNs = now - keepAliveDurationNs
      return 0
    }
  }
  return references.size
}
If a value is assigned here, the connection appears to become eligible for removal as soon as it goes idle.
The pruneAndGetAllocationCount method is called as part of a cleanup task. At the point you have linked to, the code has already logged a warning about a connection leak. The code releases that leaked call and then considers the connection to be immediately eligible for eviction.
If you have concerns about the code, consider writing a test case that demonstrates the change you would propose; it might be possible to improve it.
If it's affecting you because of leaked connections, you should instead fix the bug in the application code.
// We've discovered a leaked call. This is an application bug.
val callReference = reference as CallReference
val message = "A connection to ${connection.route().address.url} was leaked. " +
"Did you forget to close a response body?"
I'm using the basic mstor logic with version 1.0.0 of the mstor Maven library, but it throws an exception on the inbox.close() call. Note: I am not writing anything to disk, so this exception is odd. As an attempt, I made the class that calls this code implement Serializable, but that did not help.
This code runs from a Spring Boot REST service.
If I don't call inbox.close(), then on Windows the mbox file remains open (not released by this library) after the method below completes.
Here's the basic code:
Properties properties = new Properties();
properties.setProperty("mail.store.protocol", "mstor");
properties.setProperty("mstor.mbox.metadataStrategy", "none");
properties.setProperty("mstor.mbox.cacheBuffers", "disabled");
properties.setProperty("mstor.mbox.bufferStrategy", "mapped");
properties.setProperty("mstor.metadata", "disabled");
properties.setProperty("mstor.mozillaCompatibility", "enabled");

Session session = Session.getInstance(properties);

try {
    store = session.getStore(new URLName("mstor:" + pathToMboxFile));
    store.connect();

    inbox = (MStorFolder) store.getDefaultFolder();
    inbox.open(Folder.READ_ONLY);

    Message[] messages = inbox.getMessages();
    int bodyPartCount = 0;

    // ***********************
    // process all mbox data.
    // *************************
    for (int pos = 0; pos < messages.length; pos++)
    {
        // processing.
    }
}
catch (NoSuchProviderException e)
{
    log.debug("MboxController NoSuchProviderException Exception: " + e.getMessage());
}
catch (javax.mail.MessagingException e)
{
    errors.append(e);
    log.debug("MboxController MessagingException Exception: " + e.getMessage());
}
finally
{
    // close the mbox store
    try
    {
        inbox.close(false);
        store.close();
    }
    catch (MessagingException e)
    {
        log.debug("MboxController Closing Store Exception: " + e.getMessage());
    }
}
Now, although the code works, returns the mbox text, and closes the file, I get this stack trace (or something close to it) in the Tomcat log each time inbox.close(false) runs:
2019-03-31 08:06:09 - Disk Write of 191 failed:
"java.io.NotSerializableException: net.fortuna.mstor.data.MessageInputStream
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)
at net.sf.ehcache.Element.writeObject(Element.java:791)
at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1140)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at net.sf.ehcache.util.MemoryEfficientByteArrayOutputStream.serialize(MemoryEfficientByteArrayOutputStream.java:97)
at net.sf.ehcache.store.disk.DiskStorageFactory.serializeElement(DiskStorageFactory.java:413)
at net.sf.ehcache.store.disk.DiskStorageFactory.write(DiskStorageFactory.java:392)
at net.sf.ehcache.store.disk.DiskStorageFactory$DiskWriteTask.call(DiskStorageFactory.java:493)
at net.sf.ehcache.store.disk.DiskStorageFactory$PersistentDiskWriteTask.call(DiskStorageFactory.java:1151)
at net.sf.ehcache.store.disk.DiskStorageFactory$PersistentDiskWriteTask.call(DiskStorageFactory.java:1135)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I'm trying to use the MD-SAL DataBroker to save a list of data. After modifying the YANG file and the InstanceIdentifier many times, I keep hitting a similar validation issue, for example:
java.util.concurrent.ExecutionException: TransactionCommitFailedException{message=canCommit encountered an unexpected failure, errorList=[RpcError [message=canCommit encountered an unexpected failure, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaValidationFailedException: Child /(urn:opendaylight:params:xml:ns:yang:testDataBroker?revision=2015-01-05)service-datas is not present in schema tree.]]}
at org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.wrapInExecutionExc
My goal is to use the rpc save-device-info to receive data over REST, then use the DataBroker API to save the data in memory, and finally test whether the data is successfully replicated to the other cluster nodes.
YANG file:
module testDataBroker {
    yang-version 1;
    namespace "urn:opendaylight:params:xml:ns:yang:testDataBroker";
    prefix "testDataBroker";

    revision "2015-01-05" {
        description "Initial revision of testDataBroker model";
    }

    container service-datas {
        list service-data {
            key "service-id";
            uses service-id;
            uses device-info;
        }
    }

    grouping device-info {
        container device-info {
            leaf device-name {
                type string;
                config false;
            }
            leaf device-description {
                type string;
                config false;
            }
        }
    }

    grouping service-id {
        leaf service-id {
            type string;
            mandatory true;
        }
    }

    rpc save-device-info {
        input {
            uses service-id;
            uses device-info;
        }
        output {
            uses device-info;
        }
    }

    rpc get-device-info {
        output {
            uses device-info;
        }
    }
}
Java Code
@Override
public Future<RpcResult<SaveDeviceInfoOutput>> saveDeviceInfo(SaveDeviceInfoInput input) {
    String name = input.getDeviceInfo().getDeviceName();
    String description = input.getDeviceInfo().getDeviceDescription();
    String serviceId = input.getServiceId();

    WriteTransaction writeTransaction = dataBroker.newWriteOnlyTransaction();
    DeviceInfo deviceInfo = new DeviceInfoBuilder().setDeviceDescription(description).setDeviceName(name).build();
    ServiceData serviceData = new ServiceDataBuilder().setServiceId(serviceId).setDeviceInfo(deviceInfo).build();
    InstanceIdentifier<ServiceData> instanceIdentifier =
            InstanceIdentifier.builder(ServiceDatas.class).child(ServiceData.class, serviceData.getKey()).build();
    writeTransaction.put(LogicalDatastoreType.CONFIGURATION, instanceIdentifier, serviceData, true);

    boolean isFailed = false;
    try {
        writeTransaction.submit().get();
        log.info("Create containers succeeded!");
    } catch (InterruptedException | ExecutionException e) {
        log.error("Create containers failed: ", e);
        isFailed = true;
    }

    return isFailed ?
            RpcResultBuilder.success(new SaveDeviceInfoOutputBuilder())
                    .withError(RpcError.ErrorType.RPC, "Create container failed").buildFuture() :
            RpcResultBuilder.success(new SaveDeviceInfoOutputBuilder().setDeviceInfo(input.getDeviceInfo()))
                    .buildFuture();
}
Really need your help. Thanks.
Update:
With the same version of the MD-SAL bundles, I installed the odl-toaster feature on only one ODL node instead of on all cluster nodes. The rpc from odl-toaster works properly on that single node.
I didn't realize that RPCs are also clustered: sometimes the rpc request lands on another node that doesn't have the same bundles deployed. The problem was solved once the bundle was distributed to every node.
I am trying to get the data from http://stream.meetup.com/2/rsvps into a Spark stream.
The entries are JSON objects. I know the lines will arrive as strings; I just want that to work before I try parsing the JSON.
I am not sure what to put as the port, and I assume that is the problem.
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("Spark Streaming");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("http://stream.meetup.com/2/rsvps", 80);
lines.print();
jssc.start();
jssc.awaitTermination();
Here is my error
java.net.UnknownHostException: http://stream.meetup.com/2/rsvps
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:208)
socketTextStream isn't designed to work as an HTTP client. As you noticed, you will need to create a custom receiver; one potential place to start is the receiver created as part of the meetup streaming data source (see https://github.com/actions/meetup-stream/blob/master/src/main/scala/receiver/MeetupReceiver.scala ).
Here is a custom UrlReceiver that follows the Spark documentation on custom receivers:
import java.io.{BufferedReader, InputStreamReader}
import java.net.{URL, URLConnection}

import org.apache.spark.Logging // public in Spark 1.x; drop this (or use your own logging) on Spark 2.x+
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class UrlReceiver(urlStr: String) extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) with Logging {

  override def onStart() = {
    new Thread("Url Receiver") {
      override def run() = {
        val urlConnection: URLConnection = new URL(urlStr).openConnection
        val bufferedReader: BufferedReader = new BufferedReader(
          new InputStreamReader(urlConnection.getInputStream)
        )
        var msg = bufferedReader.readLine
        while (msg != null) {
          if (!msg.isEmpty) {
            store(msg)
          }
          msg = bufferedReader.readLine
        }
        bufferedReader.close()
      }
    }.start()
  }

  override def onStop() = {
    // nothing to do
  }
}
Then use it like this:
val lines = ssc.receiverStream(new UrlReceiver("http://stream.meetup.com/2/rsvps")) // ssc is your StreamingContext
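Since the question uses the Java API, the same receiver can also be plugged in through JavaStreamingContext.receiverStream. A minimal sketch, assuming the UrlReceiver above is on the classpath (the class name MeetupStreamJob is just for illustration):

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public final class MeetupStreamJob {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("Spark Streaming");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // receiverStream accepts any custom Receiver, so the UrlReceiver above
        // replaces the socketTextStream call from the question.
        JavaReceiverInputDStream<String> lines =
                jssc.receiverStream(new UrlReceiver("http://stream.meetup.com/2/rsvps"));

        lines.print();
        jssc.start();
        jssc.awaitTermination();
    }
}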
When I run my code I get:
Breaking on exception: String expected
What I am trying to do is connect to my server using a WebSocket. However, whether my server is online or not, the client still crashes.
My code:
import 'dart:html';

WebSocket serverConn;
int connectionAttempts;
TextAreaElement inputField = querySelector("#inputField");
String key;

void submitMessage(Event e) {
  if (serverConn.readyState == WebSocket.OPEN) {
    querySelector("#chatLog").text = inputField.value;
    inputField.value = "";
  }
}

void recreateConnection(Event e) {
  connectionAttempts++;
  if (connectionAttempts <= 5) {
    inputField.value = "Connection failed, reconnecting. Attempt" + connectionAttempts.toString() + "out of 5";
    serverConn = new WebSocket("ws://127.0.0.1:8887");
    serverConn.onClose.listen(recreateConnection);
    serverConn.onError.listen(recreateConnection);
  } else {
    inputField.value = "Connections ran out, please refresh site";
  }
}

void connected(Event e) {
  serverConn.sendString(key);
  if (serverConn.readyState == WebSocket.OPEN) {
    inputField.value = "CONNECTED!";
    inputField.readOnly = false;
  }
}

void main() {
  serverConn = new WebSocket("ws://127.0.0.1:8887");
  serverConn.onClose.listen(recreateConnection);
  serverConn.onError.listen(recreateConnection);
  serverConn.onOpen.listen(connected);
  //querySelector("#inputField").onInput.listen(submitMessage);
  querySelector("#sendInput").onClick.listen(submitMessage);
}
My Dart Editor says nothing about where the problem comes from, nor does it give any warning until runtime.
You need to initialize int connectionAttempts with a valid value (for example int connectionAttempts = 0;);
connectionAttempts++ fails with an exception while the value is still null.
You also need an onMessage handler to receive messages.
serverConn.onMessage.listen((MessageEvent e) { /* handle e.data */ });
recreateConnection should register an onOpen handler as well.
After serverConn = new WebSocket(...) is assigned, the listeners registered in main() no longer apply, because they were attached to the previous WebSocket instance.
If you register a listener where only one single event is expected, you can use first instead of listen:
serverConn.onOpen.first.then(connected);
Following up on @JAre's comment: try a hardcoded string
querySelector("#chatLog").text = 'someValue';
to make sure this line is not the culprit.