OpenDaylight: RemoteDevice: No more sources for schema context

Hello, I'm trying to connect a Cisco CSR1000v to the OpenDaylight controller.
I'm using OpenDaylight 0.5.2 (Boron SR2) and a Cisco CSR1000v running 15.03.05.
I followed the instructions in the OpenDaylight NETCONF guide:
http://docs.opendaylight.org/en/stable-boron/user-guide/netconf-user-guide.html
I successfully added the device, but when I check its status, it shows an error:
"node": [
{
"node-id": "CSR1K",
"netconf-node-topology:connection-status": "unable-to-connect",
"netconf-node-topology:connected-message": "RemoteDevice{CSR1K}: No more sources for schema context"
},
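For reference, this is roughly the payload I PUT to the topology-netconf node when adding the device, following the guide (the host, port, and credentials below are placeholders, not my real values):
<!-- placeholders only: substitute your device's management IP, NETCONF port, and credentials -->
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
    <node-id>CSR1K</node-id>
    <host xmlns="urn:opendaylight:netconf-node-topology">10.0.0.1</host>
    <port xmlns="urn:opendaylight:netconf-node-topology">830</port>
    <username xmlns="urn:opendaylight:netconf-node-topology">cisco</username>
    <password xmlns="urn:opendaylight:netconf-node-topology">cisco</password>
    <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
</node>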
Does anyone know what is happening? Thanks in advance.

Related

JDBC connection fails to SAP Hana Cloud: "RTEException: -708 Receive of connect failed."

Similar to https://answers.sap.com/questions/12675326/sap-dbtech-jdbc-708-receive-of-connect-fail.html
Connection to SAP Hana Cloud fails with the error "RTEException: -708 Receive of connect failed."
Steps to reproduce:
java -version
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1)
java -jar ngdbc.jar -V
package package com.sap.db.jdbc, Java Platform API Specification, version 1.4, SAP HANA JDBC Driver, SAP SE, 1.120.05-8c23c50e159e9883edab0e2ebdd4e02c5919cd08
java -jar ngdbc.jar -u DBADMIN,PASSWORD -n BIG-IDENTIFIER.hana.trial-us10.hanacloud.ondemand.com:443 -d test -o encrypt=true -o validatecertificate=false
(the certificate is imported, and the error occurs even without the parameters encrypt and validatecertificate)
Contents of trace log:
ClassLoader: jdk.internal.loader.ClassLoaders$AppClassLoader@55054057
Process-ID: 320850
package package com.sap.db.jdbc, Java Platform API Specification, version 1.4, SAP HANA JDBC Driver, SAP SE, 1.120.05-8c23c50e159e9883edab0e2ebdd4e02c5919cd08 on Java 11.0.15
---- Thread 1eb44e46 main Timestamp: 2022-09-29 09:50:03.162
new Connection 'jdbc:sap://BIG-IDENTIFIER.hana.trial-us10.hanacloud.ondemand.com:443'
password=****
databaseName=test
host=BIG-IDENTIFIER....
options=
cmd=Select top 1 42 as "connect test" fro...
user=DBADMIN
HOSTLIST: [BIG-IDENTIFIER.hana.trial-us10.hanacloud.ondemand.com:443,]
new RTEException: -708 Receive of connect failed.
whereAmI java.lang.Throwable
at com.sap.db.util.Tracer.whereAmI(Tracer.java:280)
at com.sap.db.rte.comm.RTEException.<init>(RTEException.java:51)
at com.sap.db.rte.comm.BasicSocketComm.receiveInfoRequest(BasicSocketComm.java:587)
at com.sap.db.rte.comm.BasicSocketComm.doInfoRequest(BasicSocketComm.java:84)
at com.sap.db.rte.comm.BasicSocketComm.connectDB(BasicSocketComm.java:187)
at com.sap.db.rte.comm.SocketComm$1.open(SocketComm.java:47)
at com.sap.db.jdbc.topology.Topology.getSession(Topology.java:88)
at com.sap.db.jdbc.Driver.openByURL(Driver.java:1216)
at com.sap.db.jdbc.Driver.connect(Driver.java:313)
at com.sap.db.jdbc.Driver.main(Driver.java:858)
using null
=> FAILED
Any thoughts on why this happens and how to solve it?
Since this is HANA Cloud Trial, I don't think you should use -d to specify a tenant database name. Please try removing this parameter, as you may be trying to connect to a non-existent tenant database. You can also refer to this tutorial.
The second thing to consider is the allowlist for IP addresses. By default, connections are blocked if you have not added your client IP to the allowlist. More details can be found in this blog.
Last but not least, please check whether you are using the latest version of ngdbc.jar. It can be obtained from this site.
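Putting the first point into practice, the connect test without -d would look roughly like this (same placeholder host as in the question):
# same options as before, minus the tenant database name
java -jar ngdbc.jar -u DBADMIN,PASSWORD \
  -n BIG-IDENTIFIER.hana.trial-us10.hanacloud.ondemand.com:443 \
  -o encrypt=true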

Debezium content-based routing configuration

I'm using Confluent, so I've installed the Debezium connectors according to the Confluent docs using confluent-hub. In connect.properties I have this entry:
plugin.path=/usr/share/java,/opt/confluent-6.0.0/share/confluent-hub-components
I need to use io.debezium.transforms.ContentBasedRouter (https://debezium.io/documentation/reference/1.3/configuration/content-based-routing.html),
so according to the Debezium docs I downloaded debezium-scripting-1.3.1.Final.jar
and put it into
/opt/confluent-6.0.0/share/confluent-hub-components/ and copied it into the
/opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/lib directory.
Here are the entries in my mysql_src.json connector:
"transforms": "unwrap,route",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.add.fields": "source.snapshot",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.__source_snapshot == 'false' ? 'test'"
When I try to configure/load this connector, I get the following error message:
[2020-12-15 22:18:45,351] ERROR [Worker clientId=connect-1, groupId=connect-cluster] Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1369)
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
Any suggestions on how to fix this problem?
According to the docs, you need to additionally obtain a JSR 223 script engine implementation and add its contents to the Debezium plug-in directories of your Kafka Connect environment, since:
Debezium does not come with any implementations of the JSR 223 API. To use an expression language with Debezium, you must download the JSR 223 script engine implementation for the language, and add it to your Debezium connector plug-in directories, along with any other JAR files used by the language implementation.
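For Groovy, for example, that means dropping the language runtime and its JSR 223 binding next to the Debezium JARs. A sketch, assuming Groovy 3.0.x JARs obtained from the Groovy distribution (the exact file names and version are assumptions):
# copy the Groovy runtime, JSON module, and JSR 223 engine next to the
# Debezium connector and scripting JARs (target path from the question above)
cp groovy-3.0.7.jar groovy-json-3.0.7.jar groovy-jsr223-3.0.7.jar \
   /opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/lib/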
I am not sure the configuration is correct, but I got past the first configuration problem (I hope). I'm facing another problem now, which I will describe in a different question.
I am not sure what was wrong; I did the following:
Cleaned up the ZooKeeper directories
Cleaned up the Kafka directories
Ran Kafka in distributed mode using the command-line start/stop scripts (not the Confluent CLI)
This solved the java.lang.NoClassDefFoundError: io/debezium/DebeziumException error.

Getting "unable to open input stream" error when trying to use Netty on IBM WAS server

I am trying to use the Netty JARs as part of the Pushy library (https://github.com/relayrides/pushy) to send Apple push notifications. It runs fine in my local Tomcat. When I try to deploy the same application on IBM WAS and start my server, it gives me the exception below.
com.ibm.ws.ecs.internal.scan.context.impl.ScannerContextImpl scanJAR unable to open input stream for resource io/netty/util/internal/shaded/org/jctools/queues/package-info.class in archive WEB-INF/lib/netty-common-4.1.16.Final.jar
Please find below the versions I am using.
WAS - 8.5.5
Java - 1.7
Netty - 4.1.16
Please help me understand the cause of the issue and how to fix it. Thanks.
Looking at "netty-common-4.1.16.Final.jar" from
https://mvnrepository.com/artifact/io.netty/netty-common/4.1.16.Final
I'm seeing this resource:
netty-common-4.1.16.Final.jar/io/netty/util/internal/shaded/org/jctools/queues/package-info.class
That class has been compiled with a Java 8 compiler. That seems incorrect -- none of the other class resources in the JAR are compiled for Java 8.
Data (internal reporting format) for "package-info":
/netty-common-4.1.16.Final.jar/io/netty/util/internal/shaded/org/jctools/queues/package-info.class
interface synthetic io.netty.util.internal.shaded.org.jctools.queues.package-info
extends java.lang.Object
Version [ 0x34 0x00 ] ( J2SE 8 )
The same data, for example, for "Log4JLoggerFactory":
/netty-common-4.1.16.Final.jar/io/netty/util/internal/logging/Log4JLoggerFactory.class
public io.netty.util.internal.logging.Log4JLoggerFactory
extends io.netty.util.internal.logging.InternalLoggerFactory
Version [ 0x32 0x00 ] ( J2SE 6.0 )
[F] public static final INSTANCE : [ Lio/netty/util/internal/logging/InternalLoggerFactory; ]
[M] public deprecated <init> : [ ()V ] ( void )
#java.lang.Deprecated
[M] public newInstance : [ (Ljava/lang/String;)Lio/netty/util/internal/logging/InternalLogger; ]
[M] static <clinit> : [ ()V ] ( void )
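If you want to double-check a class file's target version yourself, javap (shipped with the JDK) will show it; a quick sketch (0x34 = major version 52 = Java 8):
javap -verbose -cp netty-common-4.1.16.Final.jar \
  io.netty.util.internal.shaded.org.jctools.queues.package-info | grep 'major version'
# prints "major version: 52" for a Java 8 class file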
Can you try rerunning with that one resource removed? (Or with the JAR rebuilt so it does not use Java 8?) IBM WebSphere won't process Java 8 classes unless it is at a high enough service level (8.5.5.9 and higher), so another option is to move to a higher service level of WebSphere.
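A minimal sketch of stripping that single resource with the stock zip tool (entry path taken from the error message above; keep a backup of the original JAR):
# remove the Java 8-compiled package-info entry from the JAR
zip -d netty-common-4.1.16.Final.jar \
    io/netty/util/internal/shaded/org/jctools/queues/package-info.class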

Spring XD 1.1.0 - JDBC Source connection issues

I have installed Spring XD version 1.1.0 on a CentOS machine. Using xd-singlenode, I want to connect it to a SQL Server database via the jdbc source and put the data into a file.
I created some streams as follows:
1) xd:>stream create connectiontest --definition "jdbc --url=jdbc:sqlserver://sqlserverhost:1433/SampleDatabase --username=sample --password=***** --query='SELECT * FROM schema.tablename' | file" --deploy
2) xd:>stream create connectiontest --definition "jdbc --connectionProperties=jdbc:sqlserver://sqlserverhost:1433/SampleDatabase --username=sample --password=***** --initSQL='SELECT * FROM schema.tablename' | file" --deploy
Every time I deploy the stream, it gives the following error:
Command failed org.springframework.xd.rest.client.impl.SpringXDException: Multiple top level module resources found :file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/jms-hornetq.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/hadoop.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/xd-admin-logger.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/xd-singlenode-logger.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/xd-container-logger.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/jms-activemq.properties],file [/opt/pivotal/spring-xd-1.1.0.RELEASE/xd/config/httpSSL.properties]
Earlier I had set springxd_home pointing to my Spring XD directory. After removing that setting, it is working fine now.
Thanks for the support.
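For anyone hitting the same "Multiple top level module resources found" error, the fix boiled down to something like this (a sketch, assuming a bash shell; the variable name is the one from my setup):
# unset the stale environment variable, then restart the single node
unset springxd_home
./xd-singlenode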

Elasticsearch JDBC-River MySQL - No suitable driver found for jdbc

I am trying to import a MySQL table from a MySQL server into Elasticsearch on Mac OS X Mavericks.
I have installed Elasticsearch 1.3.1 with Homebrew
Installed jdbc-river 1.3.0.4 with elasticsearch plugin --install
Installed JDK 1.7.0_67
Downloaded mysql-connector-java-5.1.28-bin.jar into $ES_HOME/plugins/jdbc (I had to create the folders "plugins" and "jdbc" myself) and gave chmod 777 permission to the .jar file.
Then I ran ./bin/elasticsearch and sent this request from Postman in order to create a river:
PUT request.
URL: localhost:9200/_river/my_jdbc_river/_meta
Raw data:
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://localhost:3306/<databaseName>",
        "user" : "<MysqlUserName>",
        "password" : "<MysqlUserPass>",
        "sql" : "select * from <TableName>"
    }
}
And I received the following error in the elasticsearch log in the terminal:
[2014-08-26 15:38:39,300][ERROR][org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource] while opening read connection: jdbc:mysql://localhost:3306/xcollector No suitable driver found for jdbc:mysql://localhost:3306/xcollector
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/xcollector
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.getConnectionForReading(SimpleRiverSource.java:196)
at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.execute(SimpleRiverSource.java:315)
at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.fetch(SimpleRiverSource.java:287)
at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.fetch(JDBCFeeder.java:335)
at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.executeTask(JDBCFeeder.java:179)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:362)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:53)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:87)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:14)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I have followed many posts on Google to try to find the cause of the problem. Does anyone have any idea what I am missing?
Also, does anyone know whether this procedure can be made more automatic, e.g. with some kind of package manager (like npm for Node.js)?
Thanks in advance,
So the clue to your problem is that you had to create the folders "plugins" and "jdbc" in step 4. Both of those folders are created when you install the JDBC river plugin. I can see from your error message that the river plugin installed correctly - it is running, but it is unable to find the JDBC driver.
Look on your drive for the correct folder - $ES_HOME should contain the following folders:
bin
config
data
lib
logs
plugins
If it does not, then $ES_HOME is set incorrectly. Copy your JDBC driver into the correct folder as directed, and you should be able to resolve this problem.
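For a Homebrew install, a quick sketch of pointing at the real home and copying the driver (the libexec location is an assumption about Homebrew's layout; verify that plugins/jdbc already exists there):
# Homebrew usually keeps the actual Elasticsearch install under libexec
export ES_HOME=/usr/local/opt/elasticsearch/libexec
cp mysql-connector-java-5.1.28-bin.jar "$ES_HOME"/plugins/jdbc/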
