Log Oracle SQL statements with Squeryl and Play 2

I am trying to log the SQL produced by Squeryl in a Play 2 application, for debugging purposes. I am doing this with the following Oracle logging properties:
.level=SEVERE
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
oracle.net.ns.level=FINEST
oracle.net.ns.handlers=java.util.logging.ConsoleHandler
This has worked for me before in a non-Play application with the same Oracle driver jar, but in a Play application, the JUL-to-SLF4J bridge seems to be causing a problem:
Oops, cannot start the server.
Configuration error: Configuration error[Cannot connect to database [default]]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:92)
at play.api.Configuration.reportError(Configuration.scala:570)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:252)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:243)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at play.api.db.BoneCPPlugin.onStart(DB.scala:243)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.Play$.start(Play.scala:87)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:274)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: java.lang.IllegalArgumentException: can't parse argument number 18=false
at java.text.MessageFormat.makeFormat(MessageFormat.java:1339)
at java.text.MessageFormat.applyPattern(MessageFormat.java:458)
at java.text.MessageFormat.<init>(MessageFormat.java:350)
at java.text.MessageFormat.format(MessageFormat.java:811)
at org.slf4j.bridge.SLF4JBridgeHandler.getMessageI18N(SLF4JBridgeHandler.java:268)
at org.slf4j.bridge.SLF4JBridgeHandler.callLocationAwareLogger(SLF4JBridgeHandler.java:223)
at org.slf4j.bridge.SLF4JBridgeHandler.publish(SLF4JBridgeHandler.java:301)
at java.util.logging.Logger.log(Logger.java:481)
at java.util.logging.Logger.doLog(Logger.java:503)
at java.util.logging.Logger.log(Logger.java:547)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:919)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:267)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1625)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:365)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:557)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:233)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:351)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:245)
... 22 more
I tried simply removing the JUL to SLF4J bridge jar from my deployed application, but Play refuses to start if that jar isn't present, so that didn't work.
I obviously don't need to use this particular approach; I just want some way to log the SQL SELECTs being executed (preferably without admin access to the Oracle server).
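Squeryl's own session logger would also do, if JDBC-level logging can't be made to work. A minimal sketch, assuming Squeryl 0.9.x's Session.setLogger hook and placeholder connection details:

import java.sql.DriverManager
import org.squeryl.{Session, SessionFactory}
import org.squeryl.adapters.OracleAdapter

SessionFactory.concreteFactory = Some(() => {
  val session = Session.create(
    // Placeholder URL and credentials; substitute your own datasource.
    DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password"),
    new OracleAdapter)
  // Route every SQL statement Squeryl generates through Play's logger.
  session.setLogger(sql => play.api.Logger("squeryl").debug(sql))
  session
})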

I just needed to change oracle.net.ns.level to SEVERE. Logging of oracle.net is only needed if you want to log the network packets sent to and from the server, which I didn't need in this case.
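For reference, the full working set of properties; only the oracle.net.ns.level line changes from the question:

.level=SEVERE
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
oracle.net.ns.level=SEVERE
oracle.net.ns.handlers=java.util.logging.ConsoleHandler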

Related

Setup multiple kafka connect sinks

I am working on streaming data from PostgreSQL to HDFS. I have set up a Confluent environment on an HDP 2.6 sandbox. My JDBC source config for PostgreSQL is:
name=jdbc_1
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:postgresql://host:port/db?currentSchema=schema&user=user&password=password
mode=timestamp
timestamp.column.name=col1
validate.non.null=false
topic.prefix=psql-
All other connection properties are also fine, and I am running it with:
./bin/connect-standalone ./etc/kafka/connect-standalone.properties ./etc/kafka-connect-jdbc/source.properties
It works fine and creates topics based on the tables in the database, such as:
psql-table1
psql-table2
Now I want to run HDFS sinks on all the topics, to create a separate directory for every table in the PostgreSQL database.
But when I run the HDFS sink with
./bin/connect-standalone ./etc/kafka/connect-standalone.properties ./etc/kafka-connect-hdfs/hdfs-postGres.properties
while the source is still running, I get this error:
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
org.apache.kafka.connect.errors.ConnectException: Unable to start REST server
at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:214)
at org.apache.kafka.connect.runtime.Connect.start(Connect.java:53)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:95)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:331)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:299)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:398)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:212)
... 2 more
If I stop the source connector and then start the sink, it works fine.
Can anyone help me with how to set up multiple connectors?
Kafka Connect starts a REST server on port 8083.
If you run more than one standalone worker on a single machine, you need to change that port with the rest.port property, as in the sketch below.
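For example (the file name and the 8084 value are just illustrations), copy the worker config for the sink and change only the port:

# connect-standalone-sink.properties: a copy of connect-standalone.properties
rest.port=8084

Then start the sink against the copy:
./bin/connect-standalone ./etc/kafka/connect-standalone-sink.properties ./etc/kafka-connect-hdfs/hdfs-postGres.properties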
Or you can run connect-distributed and POST your source and sink configurations individually as JSON payloads to a single Connect server; then you wouldn't have this Address already in use issue.
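A sketch of the distributed approach, assuming the default REST port 8083 and reusing the source config from the question:

./bin/connect-distributed ./etc/kafka/connect-distributed.properties
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{"name": "jdbc_1", "config": {"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:postgresql://host:port/db?currentSchema=schema&user=user&password=password", "mode": "timestamp", "timestamp.column.name": "col1", "validate.non.null": "false", "topic.prefix": "psql-"}}'

The HDFS sink then gets its own POST with its own name, and both run inside the same worker.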

Apache Drill (Embedded): Failure setting up ZK for client

I am new to Apache Drill, and I am currently following the instructions from this link to learn about it:
Drill in 10 minutes
However, after checking that I had the prerequisites, I hit an error when executing the steps in the 'Start Drill on Windows' section:
Open Command Prompt.
Open the apache-drill- folder.
Go to the bin directory. For example: cd bin
Type the following command on the command line: sqlline.bat -u "jdbc:drill:zk=local"
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state= ,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:167)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:72)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:158)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327)
... 19 more
local (The system cannot find the file specified)
I am using Apache Drill 1.11.0.
Where is the 'local' file, and where can I get it?
Try drillbit in the connection string instead of zk; ZooKeeper is not involved when you run Drill in embedded mode:
"jdbc:drill:drillbit=local"
I had this issue too, but I was using PowerShell instead of Command Prompt. Try running:
cmd /r 'sqlline.bat -u "jdbc:drill:zk=local"'

sonarqube exception caught on transport layer

Good afternoon everyone. The problem is this: I have a server with SonarQube, and when I try to start the Windows service, it comes up but then stops.
The following error appears in the sonarqube log:
2017.11.14 11:04:52 WARN sea[o.e.transport.netty] [sonar-1510653879773] exception caught on transport layer [[id: 0x346b46fb, /127.0.0.1:59330 => /127.0.0.1:9001]], closing connection
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_152]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_152]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_152]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [elasticsearch-1.1.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
2017.11.14 11:04:52 INFO app[o.s.p.m.TerminatorThread] Process[search] is stopping
2017.11.14 11:04:52 INFO sea[o.s.p.StopWatcher] Stopping process
Do you know why this error occurs?
I have set up sonar.properties correctly, including setting the sonar.search.port property to 0 as this link suggests: Sonar launch error. But the problem persists.
I hope you can give me a hand.
Regards!
Uncomment the line below in the sonar properties file and change port 9001 to 0:
#sonar.search.port=9001
sonar.search.port=0
I had the same problem and fixed it like this:
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
Find the line: #sonar.web.port
Change the value from 9000 to another port, like 9002, and uncomment the line
Save your changes
Start SonarQube again
Access the server on the new port, e.g. http://localhost:9002
The reason could be the port number of SonarQube OR that of the Elasticsearch instance used by SonarQube (I had a similar problem before), so the steps to change one or both of those ports are as follows (a recap of the resulting properties appears after the list):
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
For the SonarQube port:
Find: #sonar.web.port
Change the value from 9000 to another port, like 9123, and un-comment the line (remove the # at the beginning): sonar.web.port=9123
For the port of SonarQube's Elasticsearch instance:
Find: #sonar.search.port
Change the line to sonar.search.port=0 (this means it will pick any available port and bind to it)
Save your changes
Start SonarQube again
Access the server on the newly specified SonarQube port: http://localhost:9123
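After those edits, the relevant lines in conf/sonar.properties read:

sonar.web.port=9123
sonar.search.port=0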
I experienced this error when upgrading SonarQube from version 5.6.7 to 6.7.1.
Originally I thought this was due to the port number but upon checking the web.log I noticed that there was an error relating to the LDAP plugin (2.2.0.608).
ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube org.sonar.plugins.ldap.LdapException: The property 'ldap.url' is empty and no realm configured to try auto-discovery.
Updating the sonar.properties file with the correct configuration allowed SonarQube to start.
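For what it's worth, the minimal LDAP settings in sonar.properties look something like this (the host is a placeholder):

sonar.security.realm=LDAP
ldap.url=ldap://ldap.example.com:389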
I ran into exactly the same problem.
I started SonarQube with MariaDB 5.5, but found some error messages in sonarqube-x.x/logs/web.log:
2021.01.21 14:36:17 INFO web[][o.s.p.ProcessEntryPoint] Starting web
......
2021.01.21 14:36:19 ERROR web[][o.s.s.p.Platform] Web server startup failed: Unsupported mysql version: 5.5. Minimal supported version is 5.6.
So I changed my database to MySQL 5.7 and it started successfully.
I'm not sure you had the same problem, but check these log files and see what actually happened during startup.
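If it helps, the database connection also lives in conf/sonar.properties; a sketch with placeholder credentials:

sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar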

Zabbix JMX Tomcat8 monitoring fails

I'm trying to monitor Tomcat 8 with JDK 8 using JMX.
I have set up my agents and modified startup.sh.
On my zabbix_java_gateway.log I get the following exception:
WARN com.zabbix.gateway.SocketProcessor - error processing request
com.zabbix.gateway.ZabbixException: java.net.SocketTimeoutException: connection timed out: service:jmx:rmi:///jndi/rmi://server1.example.com:10052/jmxrmi
at com.zabbix.gateway.JMXItemChecker.getValues(JMXItemChecker.java:97) ~[zabbix-java-gateway-2.4.7.jar:na]
at com.zabbix.gateway.SocketProcessor.run(SocketProcessor.java:63) ~[zabbix-java-gateway-2.4.7.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
Caused by: java.net.SocketTimeoutException: connection timed out: service:jmx:rmi:///jndi/rmi://server1.example.com:10052/jmxrmi
at com.zabbix.gateway.ZabbixJMXConnectorFactory.connect(ZabbixJMXConnectorFactory.java:123) ~[zabbix-java-gateway-2.4.7.jar:na]
at com.zabbix.gateway.JMXItemChecker.getValues(JMXItemChecker.java:89) ~[zabbix-java-gateway-2.4.7.jar:na]
... 4 common frames omitted
On my startup.sh I added the following to the CATALINA_OPTS
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10052
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/opt/tomcat-latest/conf/jmxremote.password
-Dcom.sun.management.jmxremote.access.file=/opt/tomcat-latest/conf/jmxremote.access
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=server1.example.com
My zabbix_agentd.conf contains the following:
PidFile=/tmp/zabbix_agentd.pid
LogFile=/var/log/zabbix_agentd.log
LogFileSize=1
DebugLevel=3
Server=monitor.example.com
Hostname=server1.example.com
ListenPort=10050
StartAgents=5
Timeout=30
I have already done the following:
successfully connected to the server using jconsole
removed authentication
telnetted to the server on ports 10050 and 10052
The weird part is that the same setup works well for Tomcat6 with JDK7.
EDIT 1
I've updated the JDK on the Zabbix server to be newer than the JDK installed on my Java nodes. Still the same result; it ends with
ZBX_TCP_READ() failed: [4] Interrupted system call
UPDATE
So I figured it out eventually.
I had -Djava.rmi.server.hostname=server1.example.com in my Tomcat configuration file.
I had misunderstood whether the hostname should be set to the monitoring server or to the monitored server's hostname.
Apparently there's a bug in Tomcat 6 and this directive does not work.
Removing it solved the problem completely.
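So the CATALINA_OPTS from the question, with that flag dropped (everything else unchanged):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10052
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/opt/tomcat-latest/conf/jmxremote.password
-Dcom.sun.management.jmxremote.access.file=/opt/tomcat-latest/conf/jmxremote.access
-Dcom.sun.management.jmxremote.ssl=false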
Thanks,
Liron

WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's carbon dashboard in Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle, so I am not able to see statistics in the API Manager publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting the following error in the log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (jars) into your project's /WEB-INF/lib; everything under /WEB-INF/lib is then on the classpath.
Use the snappy-java jar and it will work as you want.
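A sketch of that step; the version placeholder stands for whichever snappy-java jar matches your Spark bundle:

cp snappy-java-<version>.jar <project>/WEB-INF/lib/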
