Oracle RAC and SCAN Listeners

I have a 4-node RAC architecture, and SCAN listeners are running on 3 nodes, since Oracle recommends a minimum of 3 SCAN listeners in the blog below.
http://satya-racdba.blogspot.in/2012/09/scan-in-oracle-rac-11g-r2.html
But is it necessary to configure a SCAN listener on the 4th node as well?
Is the 4th node being picked up by the SCAN listeners?
How do I test it?

Oracle does not recommend a minimum of three SCAN listeners. Oracle states:
SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS).
You can find this information in Metalink Note:
Grid Infrastructure Single Client Access Name (SCAN) Explained (Doc ID 887522.1)
[Image: example architecture of Oracle SCAN]
A new set of cluster processes called SCAN listeners runs on three nodes in a cluster (or on all nodes if there are fewer than three). Regardless of how many nodes you have beyond three, there will be at most three SCAN listeners. The database registers with the SCAN listeners through the REMOTE_LISTENER parameter in the init.ora/spfile. If any of these clustered processes fails, it is automatically restarted on a new node.
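To answer the "how do I test it?" part: one way to verify where the SCAN listeners run and that all four instances register with them is the standard srvctl/lsnrctl tooling (the listener names below assume the defaults, LISTENER_SCAN1 to LISTENER_SCAN3):

    $ srvctl config scan             # the SCAN name and its (up to three) IP addresses
    $ srvctl status scan_listener    # which nodes currently host the SCAN listeners
    $ lsnrctl status LISTENER_SCAN1  # run on the hosting node; the services list
                                     # should include handlers for all four instances

    SQL> show parameter remote_listener  -- should point to scan-name:port on every instance

If the fourth instance's service handlers show up in each SCAN listener's status output, the fourth node is being picked up even though no SCAN listener runs on it.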
If you have to do an installation, you should review the Oracle documentation:
D Oracle Grid Infrastructure for a Cluster Installation Concepts

Multiple locations in Google Cloud SQL

I need to use SQL in multiple locations. The best option would be to place some databases (or even some records, like tagging in Mongo) in different locations. Is it possible to achieve this with Google Cloud SQL?
There may be two scenarios:
One single Cloud SQL instance in multiple locations
Different Cloud SQL instances in multiple locations
When you create a Cloud SQL instance, you choose a region where the instance and its data are stored. To reduce latency and increase availability, choose the same region for your data and your Compute Engine instances, standard environment apps, and other services.
There are mainly two location types: a regional location, i.e. a specific geographic place, and a multi-regional location, which contains at least two geographic places. Multi-regional locations are only used for backup operations in Cloud SQL.
You choose a location when you first create the instance. The location can't be changed after the instance is created.
A single region consists of several isolated data centers called zones. When creating a Cloud SQL instance you can specify whether the instance is available in a single zone or in two different zones within the selected region. Placing the Cloud SQL instance in two different zones is called the High Availability (HA) configuration.
The purpose of an HA configuration is to reduce downtime when a zone or instance becomes unavailable which might happen during a zonal outage, or when an instance becomes corrupted. With HA, your data continues to be available to client applications.
The HA configuration provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance and a standby instance.
So, considering the first scenario: if you ask whether a single Cloud SQL instance can be located in multiple locations, the answer is yes if you consider different zones to be different locations (which is reasonable, since two zones are physically separated data centers within a single GCP region). But it can only be located in two zones, and for that you have to configure High Availability (HA) for the instance.
For the second scenario, you can always create different Cloud SQL instances in different regions.
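As a hedged sketch, both scenarios map to the gcloud CLI as follows (the instance names, regions, database version, and tier are placeholders):

    # Scenario 1: one instance, two zones in the same region (HA configuration)
    gcloud sql instances create my-instance \
        --database-version=POSTGRES_14 --tier=db-custom-2-8192 \
        --region=us-central1 --availability-type=REGIONAL

    # Scenario 2: a second, independent instance in another region
    gcloud sql instances create my-instance-eu \
        --database-version=POSTGRES_14 --tier=db-custom-2-8192 \
        --region=europe-west1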
You can go through instance locations in Cloud SQL and the overview of the HA configuration to get a better understanding of the above.
There is another option in Cloud SQL called read replicas.
You use a read replica to offload work from a Cloud SQL instance. The read replica is an exact copy of the primary instance. Data and other changes on the primary instance are updated in almost real time on the read replica.
Read replicas are read-only; you cannot write to them. The read replica processes queries, read requests, and analytics traffic, thus reducing the load on the primary instance.
If you want the data to be available in multiple locations you may consider using cross-region read replicas.
Cross-region replication lets you create a read replica in a different region from the primary instance.
Cross-region read replicas have several advantages:
Improve read performance by making replicas available closer to your application's region.
Provide additional disaster recovery capability to guard against a regional failure.
Let you migrate data from one region to another.
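A minimal sketch of creating one with gcloud (the instance names and regions are placeholders):

    # create a read replica of "my-instance" in a different region
    gcloud sql instances create my-replica-asia \
        --master-instance-name=my-instance \
        --region=asia-northeast1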

Is there some way to make a distributed table still work for queries when one of the shard servers is down?

A common case is that we update ClickHouse's config, which requires restarting ClickHouse to take effect. During the restart, query services that depend on ClickHouse's distributed table return exceptions because the connection to the restarting server is lost.
So, as the title says, what I want is a way to make the distributed table still work for queries when one of the shard servers is down. Thanks.
I see two ways:
Since this server failure is transient, you can refactor your server-side code by adding a retry policy to your requests (for C#, I would recommend using Polly; see the sketch after this list).
Use a proxy (load balancer) in front of ClickHouse (for example, chproxy).
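For the first option, a minimal Polly sketch, assuming an async QueryClickHouseAsync helper of your own (the helper name, the sql variable, and the exception type are placeholders):

    using System;
    using Polly;

    // retry up to 3 times with exponential backoff while the node restarts
    var retry = Policy
        .Handle<Exception>()   // narrow this to your ClickHouse client's exception type
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    await retry.ExecuteAsync(() => QueryClickHouseAsync(sql));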
UPDATE
When one node in a cluster is restarting, a distributed table created over replicated tables should remain accessible (of course, requests shouldn't be sent to the restarting node).
Availability of data is achieved by using replication: you need to create Replicated*-tables over the materialized view and then create Distributed-tables over the Replicated*-tables.
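A hedged sketch of that layout (the cluster, database, table names, and columns are placeholders; adjust the ZooKeeper path and macros to your setup):

    CREATE TABLE events_local ON CLUSTER my_cluster
    (
        event_date Date,
        id UInt64
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_local', '{replica}')
    ORDER BY id;

    CREATE TABLE events_all ON CLUSTER my_cluster AS events_local
    ENGINE = Distributed(my_cluster, default, events_local, rand());

    -- with replicas in place, queries against events_all are routed to a live
    -- replica of each shard while one node is restarting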
Please look at the articles CH Data Distribution and Distributed vs Shard vs Replicated, and, as a working example (it is not your case), at CH Circular cluster topology.

Accessing a database between instances of the same microservice

In my project, I have a microservice [say A] that has a SQL database. We have a 5-node cluster and this microservice runs on each node, so we have 5 instances of service A running on the cluster. Now, suppose there is a SELECT query in a particular function of the microservice that retrieves data from the database. Since 5 instances are running, all 5 instances will run the same query and work on the same data. Is there any way in which we can divide the data among the 5 instances of service A?
Application clustering is different from database clustering. You cannot "divide" data among the 5 instances of the application service, since all application instances require the same set of data to function (unless your application is designed to work on a subset of the data, e.g. each application instance serves a specific list of countries, in which case you might be able to break the data up by country; a sketch of that idea follows below).
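If your workload does partition cleanly, one hand-rolled approach is to give each instance an index and let it claim only the rows whose key hashes to that index. A hypothetical sketch (the work_items table, id column, and the per-instance :instance_index value are assumptions, not from the question):

    -- each of the 5 instances is started with its own index (0..4)
    -- and selects only the rows whose id hashes to that index
    SELECT *
    FROM work_items
    WHERE MOD(id, 5) = :instance_index;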
You can look into clustering at the database level for ideas on how you can cluster at the SQL level: https://www.brentozar.com/archive/2012/02/introduction-sql-server-clusters/ .

Oracle Active Data Guard: FSFO Observer

It is recommended by Oracle to run the Observer (for fast-start failover) on a separate node that observes the Primary/Standby nodes.
Would it be possible to run the observer on a Primary/Standby node? Otherwise a separate node will be required.
It is possible to run the observer process on the Primary/Standby nodes. But what happens when the standby or primary node goes down? The observer process goes down with the primary site. If you run it on the standby site, it works only once: when the standby becomes primary, the same problem arises.
So, as recommended by Oracle, it should run on a third site (a separate node) from where it can observe both the primary and the standby site. When the primary goes down, it can initiate the failover process.
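For reference, a minimal sketch of starting the observer on that third host with DGMGRL (the connect string is a placeholder, and fast-start failover must already be configured in the broker):

    $ dgmgrl sys@primary_db
    DGMGRL> ENABLE FAST_START FAILOVER;
    DGMGRL> START OBSERVER;

START OBSERVER runs in the foreground, so on a real observer host it is usually launched in the background or as a service.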

Event based communication between Hybris cluster nodes

In what use-case would there be a need to communicate between cluster nodes? The ClusterAwareEvent interface offers the possibility to specify a source node and a target node, but shouldn't cluster nodes be as independent of each other as possible?
Well, there are a few reasons why they would need to communicate, or rather why you would want them to communicate.
Firstly, there is a concept called the Cache Invalidation Concept, whereby each cluster member holds only valid data: the members communicate with one another via TCP or UDP to mark cache entries as invalid, for example when a database item has been changed.
A basic overview of the invalidation process:
A product description is changed. Therefore, all cache entries referring to the product are invalid.
This modification to the description is done on one node, which now has to send a notification to all cluster nodes that the data is invalid.
Nodes that hold the product in their cache discard the cached data of the product and re-retrieve the product from the database the next time the product is used.
Other features of clustering within Hybris where you would want to communicate with other nodes would be:
Load Balancing
Semi-Session Failover - This allows sessions (sticky sessions) to transfer to a different cluster node. Useful if, say, a server is going down for maintenance or has a hardware defect.
These would be the main reasons I can think of off the top of my head for why you would want clusters to communicate.
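To connect this back to the ClusterAwareEvent interface from the question, here is a hedged Java sketch of a custom event that is delivered to every node except the one that raised it. The event class name is made up, and the exact interface methods and package names vary between Hybris versions (recent versions expose canPublish(PublishEventContext)), so check your version's interface before relying on this:

    import de.hybris.platform.servicelayer.event.ClusterAwareEvent;
    import de.hybris.platform.servicelayer.event.PublishEventContext;
    import de.hybris.platform.servicelayer.event.events.AbstractEvent;

    // hypothetical event that asks other nodes to refresh a locally cached value
    public class CacheRefreshEvent extends AbstractEvent implements ClusterAwareEvent
    {
        @Override
        public boolean canPublish(final PublishEventContext publishEventContext)
        {
            // publish from any node, deliver to every node except the source itself
            return publishEventContext.getSourceNodeId() != publishEventContext.getTargetNodeId();
        }
    }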
