I have two Oracle nodes running on RAC. Using TOAD, I compiled a stored procedure. My Java application runs on JBoss and uses a connection pool to the Oracle server. On one node I still see the old query running, while the other node behaves fine. How is this possible? Any ideas?
Thanks
I have two virtual servers and installed Oracle 19c on only one of them. I need to install another Oracle database on the second server and set up clustering between the two servers. How can I do this? Is this possible using Windows Cluster?
1. You cannot use Windows Cluster to deploy Oracle RAC. You should use Oracle's own software (Oracle Clusterware) to do it.
2. To deploy Oracle RAC:
a. If you installed the database as a single instance, you should first convert it to RAC and then add the second node to the cluster through Oracle's addnode procedure.
b. If your installation is already RAC, complete the prerequisites on the second node and add it using the Oracle addnode script; in recent versions of Oracle, addnode also has a graphical interface.
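For reference, a minimal sketch of the silent addnode invocation from the Grid Infrastructure home, assuming the new node has already passed the cluster prerequisite checks; node2, node2-vip, and the home path are placeholders:

    # Run as the grid software owner from an existing cluster node.
    # node2 and node2-vip are hypothetical host names.
    $GRID_HOME/addnode/addnode.sh -silent \
      "CLUSTER_NEW_NODES={node2}" \
      "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"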
I would like to be able to query the H2 in-memory database of each Corda node concurrently, but am currently unable to do so. Does anybody have a workaround?
As Alessandro described, you're going to want to follow this guide (https://docs.corda.net/docs/corda-os/4.7/node-database-access-h2.html) to connect to your nodes.
It shouldn't make a difference how many connections you make or which nodes you connect to. The H2 DB should work the way you expect.
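As a minimal sketch, assuming plain JDBC with the H2 driver on the classpath, a node exposing its H2 database on port 12345 (set via the node's H2 port configuration, per the guide above), and VAULT_STATES just as an example table, you can open one connection like this per node, each on its own thread, to query them concurrently:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class NodeDbQuery {
        public static void main(String[] args) throws Exception {
            // Port 12345 is an assumption; use the H2 port your node exposes.
            String url = "jdbc:h2:tcp://localhost:12345/node";
            // Default credentials from the Corda guide: user "sa", blank password.
            try (Connection conn = DriverManager.getConnection(url, "sa", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM VAULT_STATES")) {
                while (rs.next()) {
                    System.out.println("states: " + rs.getLong(1));
                }
            }
        }
    }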
When a connection to an Oracle server is made from a Spark cluster, is the JDBC connection established from the node/box where the code is being executed, or from the data nodes? In the latter case, do the drivers need to be installed on all data nodes in order to connect to the Oracle server?
When a connection to an Oracle server is made from a Spark cluster, is the JDBC connection established from the node/box where the code is being executed, or from the data nodes?
Data is always loaded by the executor nodes. However, the driver node needs access to the database as well, to be able to fetch metadata.
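For illustration, a minimal sketch of a JDBC read, where the URL, table name, and credentials are placeholders: defining the DataFrame touches the database from the driver (metadata only), while the rows are pulled by the executors once an action runs.

    import java.util.Properties;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class OracleRead {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("oracle-read")
                    .getOrCreate();

            Properties props = new Properties();
            props.setProperty("user", "app_user");        // placeholder credentials
            props.setProperty("password", "secret");
            props.setProperty("driver", "oracle.jdbc.OracleDriver");

            // Defining the DataFrame hits the DB from the driver (schema lookup only).
            Dataset<Row> df = spark.read().jdbc(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",   // placeholder URL
                    "EMPLOYEES",                                  // placeholder table
                    props);

            // The action below is what makes the executors open their own
            // JDBC connections and fetch the data.
            df.show();
        }
    }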
In the latter case, do the drivers need to be installed on all data nodes in order to connect to the Oracle server?
Yes. The driver has to be present on each node used by the Spark application. This can be done in any of the following ways (see the example after the list):
Having the required JARs on the classpath of each node.
Using spark.jars to distribute JARs at runtime.
Using spark.jars.packages to fetch JARs via Maven coordinates.
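For example, the last two options map to spark-submit flags like the following (the jar path and the ojdbc8 version are illustrative):

    # Option 2: ship a local jar to the driver and executors
    spark-submit --jars /opt/jdbc/ojdbc8.jar ...

    # Option 3: resolve the driver from Maven at startup
    spark-submit --packages com.oracle.database.jdbc:ojdbc8:19.8.0.0 ...
    # (equivalent to --conf spark.jars.packages=com.oracle.database.jdbc:ojdbc8:19.8.0.0)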
SnappyData documentation and architecture diagrams seem to indicate that a JDBC thin-client connection goes from the client to a locator and is then routed into a direct connection to a server.
If this is true, then I can run JDBC queries without a Lead node, correct?
Yes, that is correct. The locator provides load and connectivity information back to the client, which is then able to connect to one or more servers, either for direct access to a bucket for low-latency queries or, more importantly, for HA: it can fail over and fail back.
So, yes, your connected clients will continue to function even when the locator goes away. Note that the lead plays a different role than the locator: its primary function is to host the Spark driver, orchestrate Spark jobs, and provide HA for Spark. With no lead, you won't be able to run such jobs.
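A minimal sketch of such a thin client, assuming the SnappyData JDBC client jar is on the classpath, a locator at locator-host on the default client port 1527, and app_table as a placeholder table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SnappyClient {
        public static void main(String[] args) throws Exception {
            // The client contacts the locator, which hands back server load and
            // connectivity info; queries then go to a server directly.
            String url = "jdbc:snappydata://locator-host:1527/";  // placeholder host
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM app_table")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
            // No lead node is involved on this path; the lead only matters
            // when you want to run Spark jobs.
        }
    }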
In addition to what @jagsr has mentioned, if you do not intend to run the lead nodes (and thus no Spark jobs or column store), then you can run the cluster as a pure row store using snappy-start-all.sh rowstore (see the rowstore docs).
I have a scenario where I have to copy data from Hive to DB2. There are two ways I can implement this: one is the sqoop export command and the other is the DB2 load client. I need to know which is the better approach with respect to performance. Please give me suggestions.
Sqoop can be used to transfer large data files in HDFS to DB2 concurrently (using mappers). I have no idea about the DB2 load client.
It depends. If you are using DB2 LUW with the Sqoop connector, Sqoop can be faster depending on how many mappers your cluster has available. DB2 Load (at least in the z/OS world) can do parallel loading, so depending on how many CPs the database system has, that could be faster. So I guess it depends on your environment (the database system vs. the Hadoop cluster).
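For reference, a sketch of the Sqoop route, assuming the Hive table's files sit under the warehouse directory in HDFS and the DB2 JDBC driver jar is available to Sqoop; all hosts, names, and paths are placeholders. --num-mappers controls the concurrency mentioned above:

    sqoop export \
      --connect jdbc:db2://db2host:50000/MYDB \
      --username db2user \
      --password-file /user/etl/db2.pass \
      --table TARGET_TABLE \
      --export-dir /user/hive/warehouse/source_table \
      --num-mappers 8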