Opening multiple H2 Consoles for Corda Nodes

I would like to be able to query the H2 in-memory database of each Corda node concurrently, but am currently unable to do so. Does anybody have a workaround?

As Alessandro described, you'll want to follow this guide (https://docs.corda.net/docs/corda-os/4.7/node-database-access-h2.html) to connect to your nodes.
It shouldn't matter how many connections you open or which nodes you connect to; the H2 database handles concurrent connections as you'd expect.
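As a minimal sketch, here is one way to query two nodes' databases over JDBC from the same process. The ports (12345/12346) are placeholders for whatever each node's h2Settings block in node.conf exposes, and the VAULT_STATES query is just an example; user "sa" with a blank password is the documented default:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NodeDbQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder ports: each node exposes its own H2 TCP port
        // via the h2Settings block in its node.conf.
        String[] urls = {
            "jdbc:h2:tcp://localhost:12345/node",
            "jdbc:h2:tcp://localhost:12346/node"
        };
        for (String url : urls) {
            // Default credentials per the Corda docs: user "sa", blank password.
            try (Connection conn = DriverManager.getConnection(url, "sa", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM VAULT_STATES")) {
                if (rs.next()) {
                    System.out.println(url + " -> " + rs.getLong(1) + " vault states");
                }
            }
        }
    }
}
```

These connections are independent, so you can just as well open them from separate threads or separate H2 console windows at the same time.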

Related

Does Jedis need to know all the cluster IPs to function or can it work with just one and let it figure out the others automatically?

I'm trying to connect to both a single-node Redis and an Elasticache cluster with the same code base. I am using Jedis directly (not via Spring), since this is a legacy app, and I am setting up Tomcat to use Jedis as a session store.
Does Jedis need to know all the cluster IPs to function or can it work with just one and let it figure out the others automatically?
It can work with just one active node and will figure out the other nodes internally.
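As an illustration, JedisCluster only needs one seed endpoint: it pulls the slot-to-node map from that node and discovers the rest of the cluster itself. A minimal sketch (the hostname is a placeholder for your Elasticache configuration endpoint):

```java
import java.util.Collections;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterSeedExample {
    public static void main(String[] args) {
        // Seed the client with a single known node; JedisCluster fetches the
        // full slot-to-node map from it and discovers the remaining nodes.
        // The hostname below is a placeholder for your Elasticache endpoint.
        try (JedisCluster cluster = new JedisCluster(
                Collections.singleton(new HostAndPort("my-cluster.example.com", 6379)))) {
            cluster.set("greeting", "hello");
            System.out.println(cluster.get("greeting"));
        }
    }
}
```

For the single-node (non-cluster) Redis, you'd use a plain Jedis/JedisPool instead; JedisCluster requires cluster mode to be enabled on the server side.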

Can I use a SnappyData JDBC connection with only a Locator and Server nodes?

The SnappyData documentation and architecture diagrams seem to indicate that a JDBC thin-client connection goes from the client to a locator, which then routes it to a direct connection to a server.
If this is true, then I can run JDBC queries without a Lead node, correct?
Yes, that is correct. The locator provides load and connectivity information back to the client, which can then connect to one or more servers, both for direct access to a bucket (low-latency queries) and, more importantly, for HA: the connection can fail over and fail back.
So, yes, your connected clients will continue to function even when the locator goes away. Note that the "lead" plays a different role than the locator: its primary function is to host the Spark driver, orchestrate Spark jobs, and provide HA to Spark. With no lead, you won't be able to run such jobs.
In addition to what @jagsr mentioned, if you do not intend to run lead nodes (and thus no Spark jobs or the column store), you can run the cluster as a pure row store using snappy-start-all.sh rowstore (see the rowstore docs).
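To make the routing concrete, here is a hedged sketch of a thin-client connection that only ever talks to the locator up front; the host and table name are placeholders, and 1527 is the conventional default client port:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LocatorOnlyQuery {
    public static void main(String[] args) throws Exception {
        // The thin client contacts the locator (placeholder host below),
        // which hands back server connectivity info; queries then run
        // against a server directly. No lead node is involved.
        String url = "jdbc:snappydata://locator-host:1527/";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_table")) {
            if (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}
```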

How to configure parse server to use an n-node mongo setup

Migrating to Mongo is well documented, but I could not find a reference or guidelines for configuring the server to work with an n-node Mongo cluster.
mLab suggests that users on anything other than a single node (aka sandbox) should run tests to cover primary-node failure.
Has anyone configured Parse Server on, say, a 3-node Mongo cluster? How?
Alternatively, what volume of users/requests should prompt an n-node Mongo cluster setup?
Use the URI you're given by the likes of mLab (formerly MongoLab); Parse Server will sort it out.
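Parse Server hands that URI to the underlying MongoDB driver, which discovers the replica-set members and handles primary failover on its own. Purely as an illustration of the URI shape, here is the same style of replica-set connection string used with the official MongoDB Java driver (hosts, database name, and replica-set name are all placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class ReplicaSetUriExample {
    public static void main(String[] args) {
        // List the replica-set members in the URI; the driver discovers the
        // primary and fails over automatically. Hosts/rs name are placeholders.
        String uri = "mongodb://db0.example.com:27017,db1.example.com:27017,"
                   + "db2.example.com:27017/mydb?replicaSet=rs0";
        try (MongoClient client = MongoClients.create(uri)) {
            Document ping = client.getDatabase("mydb").runCommand(new Document("ping", 1));
            System.out.println(ping.toJson());
        }
    }
}
```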

Stored proc behaving differently on two nodes

I have two Oracle nodes running on RAC. Using TOAD, I compiled a stored procedure. My Java application runs on JBoss and uses a connection pool to the Oracle server. On one node I still see the old query running, while the other node behaves fine. How is this possible? Any ideas?
Thanks

elasticsearch snapshot fails when nodes in a cluster run in different machines

We are using Elasticsearch 1.1.1.
We have a cluster of 3 nodes, with each node on a different machine.
Accessing the cluster and performing index operations work fine.
But when we use the snapshot feature to take a backup, the backup fails.
However, if all three nodes run on the same machine, the snapshot command works fine.
Has anybody faced this issue?
I did not include the configuration details here, as the cluster and indexing operations work without any issues.
Thanks in advance.
For those looking for a solution: a shared-filesystem snapshot repository must be reachable at the same path from every node in the cluster, so a shared filesystem such as NFS is required when the nodes run on different machines.
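For reference, once the shared mount exists on every node, registering the fs repository is a single REST call. Here is a hedged sketch issuing it from Java; the mount path /mnt/es-backups and the repository name my_backup are placeholders:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RegisterSharedRepo {
    public static void main(String[] args) throws Exception {
        // /mnt/es-backups is a placeholder: it must be the same shared (NFS)
        // mount point on every node, writable by the elasticsearch process.
        String body = "{\"type\": \"fs\", \"settings\": {\"location\": \"/mnt/es-backups\"}}";

        URL url = new URL("http://localhost:9200/_snapshot/my_backup");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("repository registration: HTTP " + conn.getResponseCode());
    }
}
```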
