SymmetricDS - Oracle - Specify Schema and disable Push

I'm a newbie with SymmetricDS.
I have some questions about Oracle replication:
Can I specify which schemas I want to synchronize?
Modifications are only made on the master. Can I disable push synchronization (i.e. only syncing from master to nodes)?
Thanks a lot for your help.
I've deployed two RHEL servers with Oracle 19c.
X.X.X.X is the IP address of the "centralNode" server.
Y.Y.Y.Y is the IP address of the "clientNode" server.
ACF and ADM are the schemas that I want to synchronize.
I would like to synchronize the two schemas in pull mode only (i.e. only syncing from master to nodes).
Here is my server.properties on the "centralNode" server:
#Friendly name to refer to this node from command line
engine.name=centralNode
#The class name for the JDBC Driver
db.driver=oracle.jdbc.driver.OracleDriver
#db.driver=org.h2.Driver
#The JDBC URL used to connect to the database
db.url=jdbc:oracle:thin:@X.X.X.X:1521:MTL
#The database user that SymmetricDS should use.
db.user=<>
#The database password
db.password=<>
#This node will contact the root node's sync.url to register itself.
#The registration.url should be empty for the master node.
#For client nodes, set the registration.url to be the master's sync.url.
registration.url=
#Synchronization URL with Master Server IP and Synchro engine name
sync.url=http://X.X.X.X:31415/sync/centralNode
#Node group this node belongs to, which defines what it will sync with who.
#Must match the sym_node_group configuration in database.
group.id=central
#External ID for this node, which is any unique identifier you want to use.
external.id=000
auto.registration=true
auto.reload=true
Here is my client.properties on the "clientNode" server:
#Friendly name to refer to this node from command line
engine.name=clientNode
#The class name for the JDBC Driver
db.driver=oracle.jdbc.driver.OracleDriver
#db.driver=org.h2.Driver
#The JDBC URL used to connect to the database
db.url=jdbc:oracle:thin:@Y.Y.Y.Y:1521:SEL
#The database user that SymmetricDS should use.
db.user=<>
#The database password
db.password=<>
#This node will contact the root node's sync.url to register itself.
#The registration.url should be empty for the master node.
#For client nodes, set the registration.url to be the master's sync.url.
registration.url=http://X.X.X.X:31415/sync/centralNode
#Synchronization URL with Client Server IP and Synchro engine name
sync.url=http://Y.Y.Y.Y:31415/sync/clientNode
#Node group this node belongs to, which defines what it will sync with who.
#Must match the sym_node_group configuration in database.
group.id=client
#External ID for this node, which is any unique identifier you want to use.
external.id=001
I've installed the SYM tables from the "centralNode" server with the following command:
bin/symadmin --engine centralNode create-sym-tables
The tables are present in the Oracle database on the "centralNode" server.
I've started the SymmetricDS service on the "centralNode" server:
bin/sym_service start
I've started the SymmetricDS service on the "clientNode" server:
bin/sym_service start
in the log of the "centralNode" server
2022-09-19 14:48:27,939 WARN [centralNode] [RegistrationUriHandler] [qtp361268035-14] client:001:? was not allowed to register.
2022-09-19 14:48:27,941 WARN [clientNode] [RegistrationService] [clientNode-job-3] Waiting for registration to be accepted by the server. Registration is not open.
in the log of the "clientNode" server
2022-09-19 14:49:10,404 INFO [clientNode] [DataLoaderService] [clientNode-job-1] Using registration URL of http://X.X.X.X:31415/sync/centralNode/registration
2022-09-19 14:49:10,442 WARN [clientNode] [RegistrationService] [clientNode-job-1] Waiting for registration to be accepted by the server. Registration is not open.
I tried to open registration from the "centralNode" server:
bin/symadmin open-registration --engine centralNode client 001
In the log of the "centralNode" server
Please add a group link where the source group id is central and the target group id is client
2022-09-19 14:51:37,869 WARN [centralNode] [RegistrationUriHandler] [qtp361268035-14] client:001:? was not allowed to register.
2022-09-19 14:51:54,912 WARN [centralNode] [RegistrationService] [qtp361268035-17] Cannot register a client node unless a node group link exists so the registering node can receive configuration updates. Please add a group link where the source group id is central and the target group id is client
So I tried to add the node group link:
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action) values ('central', 'client', 'P');
In the log of the "centralNode" server
2022-09-19 14:54:16,211 WARN [centralNode] [RegistrationUriHandler] [qtp361268035-14] client:001:? was not allowed to register
2022-09-19 14:54:17,977 INFO [clientNode] [DataLoaderService] [clientNode-job-6] Using registration URL of http://X.X.X.X:31415/sync/centralNode/registration
2022-09-19 14:54:18,325 WARN [centralNode] [RegistrationService] [qtp361268035-17] Cannot register a client node unless a node group link exists so the registering node can receive configuration updates. Please add a group link where the source group id is central and the target group id is client
Could you maybe help me?

Yes, there's a source schema column (source_schema_name) in the trigger table (sym_trigger) and a target schema column (target_schema_name) in the router table (sym_router).
Yes, you can put the master in one group and the nodes in another, then define triggers only for the master's tables and routers that route data only from master to nodes.
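To make this concrete, here is a minimal sketch of those configuration rows, executed over plain JDBC against the master database. The trigger/router IDs, the 'default' channel, the credentials, and the single wildcard trigger for the ACF schema are placeholder assumptions, not taken from your setup. Note that 'W' (wait for pull) on the group link gives the pull-only behavior you asked about, whereas the 'P' you inserted means central would push.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: inserts the SymmetricDS configuration rows for one-way,
// pull-based sync of the ACF schema from group 'central' to group 'client'.
public class ConfigureOneWaySync {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@X.X.X.X:1521:MTL", "symmetric", "secret");
             Statement st = con.createStatement()) {

            // 'W' = wait for pull: clients pull from central; central never pushes.
            st.executeUpdate("insert into sym_node_group_link "
                + "(source_node_group_id, target_node_group_id, data_event_action) "
                + "values ('central', 'client', 'W')");

            // Capture changes on every table of the ACF schema (wildcard table name).
            st.executeUpdate("insert into sym_trigger "
                + "(trigger_id, source_schema_name, source_table_name, channel_id, "
                + "create_time, last_update_time) "
                + "values ('acf_all', 'ACF', '*', 'default', "
                + "current_timestamp, current_timestamp)");

            // Route the captured rows from the central group to the client group,
            // into the same schema name on the target.
            st.executeUpdate("insert into sym_router "
                + "(router_id, source_node_group_id, target_node_group_id, "
                + "router_type, target_schema_name, create_time, last_update_time) "
                + "values ('central_to_client', 'central', 'client', 'default', "
                + "'ACF', current_timestamp, current_timestamp)");

            // Tie the trigger to the router.
            st.executeUpdate("insert into sym_trigger_router "
                + "(trigger_id, router_id, initial_load_order, create_time, "
                + "last_update_time) "
                + "values ('acf_all', 'central_to_client', 1, "
                + "current_timestamp, current_timestamp)");
        }
    }
}

Repeat the trigger/router pair for the ADM schema, and you may need to run symadmin open-registration again once the link exists.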

Related

GCP mongodb external ip connection issue

I have a Spring MVC application and I am connecting it to a MongoDB cluster.
This is in the application.properties file:
mongodb.url=mongodb://userName:Password@xx.xx.x.xx:27017,xx.xx.x.xx:27017,xx.xx.x.xx:27017/?authSource=admin
The cluster is deployed on GCP with one primary and 2 secondary servers.
However, after deployment, when I hit the API to get the data, I get an error:
{java.net.UnknownHostException: mongodb-3-arbiters-vm-0}}, {address=mongodb-3-servers-vm-1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongodb-3-servers-vm-1}, caused by {java.net.UnknownHostException: mongodb-3-servers-vm-1}}
The external IPs are mapped to the server names on the GCP dashboard (xx.xx.xx.xx:27017 to mongodb-3-servers-vm-1:27017), hence the unknown host exception. What can I do to avoid that?
When connecting to a replica set, the hostnames, IP addresses and port numbers provided in the connection string are the seedlist.
The driver will connect to the hosts in the seedlist in order to get an initial connection, and it uses this connection to perform server discovery: it queries the first server it connects to for the host names, port numbers, and status of the other members of the replica set. The server obtains this information from the replica set configuration document.
This means that the hostnames and port number you used when running rs.initiate or rs.add must be resolvable by both the replica set members and each client host that will be connecting.
There is a feature that supports passing remote clients a different host name, similar to split-horizon DNS, but outside of the git repository, I don't see any mention of it.
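To illustrate, here is a hedged sketch with the current Java sync driver (mongodb-driver-sync); the host names and credentials are placeholders, and your Spring setup may wire this differently:

import com.mongodb.ConnectionString;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

// Sketch: the hosts in the connection string are only the seedlist. After the
// initial connection the driver switches to the member host names stored in
// the replica set configuration (what was passed to rs.initiate/rs.add), so
// those names must also resolve from this client machine.
public class SeedListDemo {
    public static void main(String[] args) {
        ConnectionString cs = new ConnectionString(
            "mongodb://userName:Password@xx.xx.x.xx:27017,xx.xx.x.xx:27017/?authSource=admin");
        try (MongoClient client = MongoClients.create(cs)) {
            // replSetGetStatus reports members under their configured host
            // names; if those don't resolve here, you get UnknownHostException.
            Document status = client.getDatabase("admin")
                .runCommand(new Document("replSetGetStatus", 1));
            System.out.println(status.toJson());
        }
    }
}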

MQRC_UNKNOWN_ALIAS_BASE_Q when connecting with IBM MQ cluster using CCDT and Spring Boot JMSTemplate

I have a Spring Boot app using JMSListener + IBMConnectionFactory + CCDT for connecting an IBM MQ Cluster.
I set the following connection properties:
- url pointing to a generated ccdt file
- username (password not required, since test environment)
- queue manager name is NOT defined, since it's the cluster's task to decide; a few Google results, including several Stack Overflow ones, indicate that in my case the qmgr must be set to an empty string.
When my Spring Boot JMSListener tries to connect to the queue, the following MQRC_UNKNOWN_ALIAS_BASE_Q error occurs:
2019-01-29 11:05:00.329 WARN [thread:DefaultMessageListenerContainer-44][class:org.springframework.jms.listener.DefaultMessageListenerContainer:892] - Setup of JMS message listener invoker failed for destination 'MY.Q.ALIAS' - trying to recover. Cause: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2082' ('MQRC_UNKNOWN_ALIAS_BASE_Q').
com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:513)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
In the MQ error log I see the following:
01/29/2019 03:08:05 PM - Process(27185.478) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9999: Channel 'MyCHL' to host 'MyIP' ended abnormally.
EXPLANATION:
The channel program running under process ID 27185 for channel 'MyCHL'
ended abnormally. The host name is 'MyIP'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
01/29/2019 03:15:14 PM - Process(27185.498) User(mqm) Program(amqrmppa)
Host(myhost) Installation(Installation1)
VRMF(9.0.0.5) QMgr(MyQMGR)
AMQ9209: Connection to host 'MyIP' for channel 'MyCHL' closed.
EXPLANATION:
An error occurred receiving data from 'MyIP' over TCP/IP. The connection
to the remote host has unexpectedly terminated.
The channel name is 'MyCHL'; in some cases it cannot be determined and so
is shown as '????'.
ACTION:
Tell the systems administrator.
Since the MQ error log contains QMgr(MyQMGR), a value I did not set in the connection properties, I assume the routing is fine: the MQ cluster figured out a queue manager to use.
The alias exists and points to an existing queue. Both the target queue and the alias are added to the cluster via the CLUSTER(clustname) keyword.
What can be wrong?
Short Answer
MQ Clustering is not used for a consumer application to find a queue to GET messages from.
MQ Clustering is used when a producer application PUTs messages to direct them to a destination.
Further reading
Clustering is used when messages are being sent to help provide load balancing to multiple instances of a clustered queue. In some cases people use this for hot/cold failover by having two instances of a queue and keeping only one PUT(ENABLED).
If an application is a producer that is putting messages to a clustered queue, it only needs to be connected to a queue manager in the cluster and have permissions to put to that clustered queue. MQ based on a number of different things will handle where to send that message.
Prior to v7.1 there were only two ways to provide access to remote clustered queues:
1. Using a QALIAS:
   - Define a local QALIAS which has a TARGET set to the clustered queue name (note this QALIAS does not itself need to be clustered).
   - Grant permission to put to the local QALIAS.
2. Provide permissions to PUT to the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
The first option allows for granting granular access to an application for specific clustered queues in the cluster. The second option allows for the application to put to any clustered queue in the cluster or any queue on any clustered queue manager in the cluster.
At v7.1 IBM added a new optional behavior, provided with the setting ClusterQueueAccessControl=RQMName in the Security stanza of qm.ini. If this is enabled (it is not the default), then you can provide permission for the app to PUT to remote clustered queues directly, without the need for a local QALIAS.
What clustering is not for is consuming applications such as your example of a JMSListener.
An application that will consume from any QLOCAL (clustered or not) must be connected to the queue manager where the QLOCAL is defined.
If you have a situation where there are multiple instances of a clustered QLOCAL that are PUT(ENABLED), you need to ensure you have consumers connected directly to each queue manager that hosts an instance.
Based on your comment you have a CCDT with an entry such as:
CHANNEL('MyCHL') CHLTYPE(CLNTCONN) QMNAME('MyQMGR') CONNAME('node1url(port1),node2url(port2)')
If there are two different queue managers with different queue manager names listening on node1url(port1) and node2url(port2), then you have different ways to accomplish this from the app side.
When you specify the QMNAME to connect to, the app will expect that name to match the queue manager it connects to, unless one of the following applies (there is a short connection sketch after this list):
If you specify *MyQMGR, it will find the channel or channels with QMNAME('MyQMGR'), pick one to connect with, and will not enforce that the remote queue manager name must match.
If in your CCDT you have QMNAME('') (set to NULL), then in your app you can specify an empty queue manager name or a single space, and it will find this entry in the CCDT and will not enforce that the remote queue manager name must match.
If in your app you specify the queue manager name as *, MQ will use any channel in the CCDT and will not enforce that the remote queue manager name must match.
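For example, a minimal hedged sketch of the first form using the IBM MQ classes for JMS (the CCDT path and user are placeholders, not taken from your configuration):

import java.net.URL;
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;

// Sketch: connect through a CCDT without enforcing the remote queue manager
// name, using the '*' prefix form described above.
public class CcdtConnect {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        // Point the factory at the generated client channel definition table.
        cf.setCCDTURL(new URL("file:///var/mqm/AMQCLCHL.TAB"));
        // '*MyQMGR' = pick a CCDT entry whose QMNAME is 'MyQMGR', but do not
        // require the queue manager we land on to actually be called MyQMGR.
        cf.setQueueManager("*MyQMGR");
        Connection con = cf.createConnection("appuser", "");
        con.start();
        con.close();
    }
}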
One limitation of CCDT is that the channel name must be unique in the CCDT. Even if the QMNAME is different, you can't have a second entry with the same channel name.
When you connect, you hit the entry with two CONNAMEs and get connected to the first IP(port). You would only get to the second IP(port) if the first is unavailable at connect time, in which case MQ tries the second, or if you are connected with RECONNECT enabled and the first goes down, in which case MQ tries the first and then the second.
If you want both clustered queue instances PUT(ENABLED) to receive traffic, then you need to be able to connect specifically to each of the two queue managers to read those queues.
I would suggest you add a new channel on each queue manager that has a different QM specific name that is also different from the existing name, something like this:
CHANNEL('MyCHL1') CHLTYPE(CLNTCONN) QMNAME('MyQMGR1') CONNAME('node1url(port1)')
CHANNEL('MyCHL2') CHLTYPE(CLNTCONN) QMNAME('MyQMGR2') CONNAME('node2url(port2)')
This would be in addition to the existing entry.
For your putting components you can continue to use the channel that can connect to either queue manager.
For your getting components you can configure at least two of them, one connecting to each queue manager using the new queue-manager-specific CCDT entries; this way both queues are being consumed.

Can I access the NiFi REST API using localhost instead of the actual node IP address in a NiFi cluster?

For example, I have 3 NiFi nodes in a NiFi cluster. Example hostnames of these nodes:
192.168.12.50:8080(primary)
192.168.54.60:8080
192.168.95.70:8080
I know that I can access the NiFi REST API from all NiFi nodes. I have a GetHTTP processor to get the cluster summary from the REST API, and this processor runs only on the primary node. I set the "URL" property of this processor to 192.168.12.50:8080/nifi-api/controller/cluster.
But if the primary node goes down, a new primary node will be elected, and I will not be able to access the 192.168.12.50:8080 address from the new primary node, because that node is down. So I will not be able to get the cluster summary from the REST API.
In this case, can I use "localhost:8080/nifi-api/controller/cluster" instead of "192.168.12.50:8080/nifi-api/controller/cluster" on each node in the NiFi cluster?
It depends on a few things... if you are running securely, then you have certificates generated for each node, specific to the hostname, and the host in the web request needs to match the host in the certificate, so you can't use localhost in that case.
It also depends how NiFi's web server is configured. If nifi.web.http.host or nifi.web.https.host has a specific hostname specified, then the web server is only bound to that hostname and may not accept connections with a different hostname. In a default unsecure setup, if you leave nifi.web.http.host blank then it binds to all interfaces.
You may be able to use the expression language function to obtain the hostname of the current node. So you could make the url something like "http://${hostname()}/nifi-api/controller/cluster".

Nifi - Remote Process Group - PeerSelector

I have built a simple process group. It generates a FlowFile with some random stuff in it and sends it to a NiFi Remote Process Group.
This Remote Process Group is configured to send the FlowFile to localhost, or in this case to my own hostname (I have tried localhost as well).
After this the FlowFile should appear at the "From MiNiFi" input port and is sent to LogAttribute. Nothing special.
I configured it to use RAW, but it doesn't work with HTTP either.
I am using the apache/nifi Docker image and didn't change anything in nifi.properties or authorizers.xml, but of course I provide you both:
nifi.properties
authorizers.xml
The error occurring is this:
WARNING org.apache.nifi.remote.client.PeerSelector#40081613 Unable to refresh Remote Group's peers due to Unable to communicate with remote Nifi cluster in order to determine which nodes exist in the remote cluster
I hope you can help me. I have wasted too much time on this problem XD
In nifi.properties you have nifi.web.http.host=f4f40c87b65f, so the hostname that NiFi is listening for requests on is f4f40c87b65f, which means the URL of your RPG must be http://f4f40c87b65f:8080/nifi

How to connect to Elasticsearch server remotely using load balancer

There might already be a post with what I am looking for. I have very limited time and got this requirement at the last moment. I need to push the code to QA and set up Elasticsearch with the admin team. Please respond as soon as possible, or share the link if a similar post exists!
I have a scenario wherein I will have multiple Elasticsearch servers: one hosted in the USA, another in the UK, and one more in India, all within the same (company) network and sharing the same cluster name. I can set multicast to false and use unicast to provide the host and IP address information to form a topology.
Now, in my application, I know that I have to use the TransportClient as follows:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
Following are my concerns,
1) As per the above information, the admin team will just provide a single IP address, that of the load balancer, and the load balancer will manage request and response handling; I mean the load balancer is responsible for redirecting to the respective Elasticsearch server. My question is: is it okay to use the TransportClient to connect to that host and port as follows?
new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("loadbalancer-ip-address", loadBalancerPort)); // the port must be an int
If the load balancer redirects requests to the Elasticsearch servers, what should its configuration be? Do we need to provide all the Elasticsearch host/IP address details to it, so that at any given point in time, if the master Elasticsearch server fails, it will pick another master?
2) What is the best configuration for 4 nodes or Elasticsearch servers, in terms of shards, replicas, etc.?
Should each node have one primary shard and one replica, which can be configured in elasticsearch.yml?
Please reply as soon as possible.
Thanks in advance.
