If I shard my microservice data by end user/tenant and each server has a subset of the total data, how do I query for data across all servers?

Imagine a highly scalable architecture where each tenant is sharded and distributed by region and availability zone, and each server holds a subset of the total data. There is also redundancy: multiple physical shards host the same logical shard.
This works great: clients can use a map/reduce style to retrieve all the data for requests that return extreme amounts of data, provided they know all the logical shards that a user is assigned to. This solves the problem of a user's data exceeding the storage, memory, or compute capacity of any individual server.
My question, then, is: if the data for a noun microservice is isolated and sharded across multiple servers, and every server hosts a different subset of users or tenants, how do I create a view of all the objects in the system? I've effectively denormalised for performance, but that means there is extreme read amplification to see the total number of objects in the system.
If I wanted a GUI that shows all the noun objects of each microservice, and there are N physical shards and M noun services, I would need to make N×M requests to fetch all the data and sort it for presentation. That would be incredibly inefficient.
I'm thinking of this more from an administration GUI perspective. Nobody wants to log into X separate microservice frontends to manage all the data in the system.
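To make the read amplification concrete, the naive N×M scatter-gather would look roughly like the sketch below (the NounServiceClient interface and the string-typed objects are hypothetical, purely to illustrate the fan-out):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    // Hypothetical client for one physical shard of one noun service.
    interface NounServiceClient {
        CompletableFuture<List<String>> listObjects(); // objects rendered as plain strings here
    }

    final class AdminFanOut {
        // One client per (noun service, physical shard) pair: N x M clients in total.
        private final List<NounServiceClient> shardClients;

        AdminFanOut(final List<NounServiceClient> shardClients) {
            this.shardClients = shardClients;
        }

        // Issues N x M requests, waits for all of them, then merges and sorts for presentation.
        List<String> listAllObjects() {
            final List<CompletableFuture<List<String>>> futures = new ArrayList<>();
            for (final NounServiceClient client : shardClients) {
                futures.add(client.listObjects());
            }
            final List<String> all = new ArrayList<>();
            for (final CompletableFuture<List<String>> future : futures) {
                all.addAll(future.join()); // every shard must answer before the view is complete
            }
            all.sort(Comparator.naturalOrder());
            return all;
        }
    }

Every admin page load touches every physical shard of every noun service before anything can be sorted or paginated, which is the inefficiency I want to avoid.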
Is this a use case for data warehousing or a data lake?

Is this a use case for data warehousing or a data lake?
Yes. Replicating data into a central repository (an Operational Data Store, Data Lake, or Data Warehouse) is a common pattern in microservice and multi-tenant application architectures.
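As a rough sketch of what that replication can look like (not a prescribed design): each noun service publishes change events, and a single projector writes them into one central read model that the admin GUI queries. The Kafka topic, the PostgreSQL target, and the table schema below are all assumptions for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public final class CentralReadModelProjector {
        public static void main(final String[] args) throws Exception {
            final Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092"); // assumption
            props.put("group.id", "admin-read-model");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
                 Connection db = DriverManager.getConnection("jdbc:postgresql://warehouse/admin")) { // assumption
                consumer.subscribe(List.of("noun-object-changes")); // hypothetical topic fed by every shard
                final PreparedStatement upsert = db.prepareStatement(
                    "INSERT INTO all_objects (object_id, payload) VALUES (?, ?) " +
                    "ON CONFLICT (object_id) DO UPDATE SET payload = EXCLUDED.payload");
                while (true) {
                    final ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (final ConsumerRecord<String, String> record : records) {
                        upsert.setString(1, record.key());   // object id
                        upsert.setString(2, record.value()); // serialised object
                        upsert.executeUpdate();
                    }
                }
            }
        }
    }

The admin GUI then issues one query against the central store instead of N×M fan-out calls, at the cost of the replicated view being eventually consistent.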

Related

How can I shard requests using Aeron Cluster

I'd like to understand the capabilities of Aeron Cluster with respect to sharding requests across different back-end cluster application instances. I am thinking of something similar to partitions in Kafka, where distinct back-end consumers process the workload in independent processes. There would be a partition key that defines how to find the partition, or it could be a consumer-provided hash, etc.
So far I have only been reading the documentation and the API docs. I also read the Aeron Cookbook article on sharding (https://aeroncookbook.com/aeron-cluster/on-sharding/), but it was not much help.
Could someone provide an example of this, if it is possible? The cookbook does not really help here because it poses a similar problem, but with dependencies between the shards.
Aeron Cluster does not directly support sharding. Its primary goal is maintaining redundant copies of the same data across multiple nodes. Sharding would need to be layered on via your own application logic. One approach is to run multiple clusters and use a key to partition data across them, then within your client application run multiple cluster clients (one for each cluster) and select the appropriate client based on the data you are interacting with.
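A minimal sketch of that approach, assuming you have already connected one AeronCluster client per cluster; the key-to-cluster routing below is ordinary application logic, not an Aeron feature:

    import io.aeron.cluster.client.AeronCluster;
    import org.agrona.DirectBuffer;

    // Routes each request to one of several independent Aeron Clusters based on a partition key.
    public final class ShardedClusterClient implements AutoCloseable {
        private final AeronCluster[] clusterClients; // one connected client per cluster

        public ShardedClusterClient(final AeronCluster[] clusterClients) {
            this.clusterClients = clusterClients;
        }

        // Pick the cluster that owns this key, then offer the message to it.
        public long offer(final long partitionKey, final DirectBuffer buffer, final int offset, final int length) {
            final int cluster = (int) Math.floorMod(partitionKey, (long) clusterClients.length);
            return clusterClients[cluster].offer(buffer, offset, length);
        }

        @Override
        public void close() {
            for (final AeronCluster client : clusterClients) {
                client.close();
            }
        }
    }

Note that a plain modulo only works while the number of clusters is fixed; if clusters can be added, the routing needs something like consistent hashing plus a data migration story.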

How to do small queries efficiently in ClickHouse

In our deployment, there are one thousand shards. The insertions are done via a distributed table with the sharding key jumpConsistentHash(colX, 1000). When I query for rows with colX=... and turn on send_logs_level='trace', I see the query is sent to all shards and executed on each shard. This is limiting our QPS (queries per second). Checking the ClickHouse documentation, it states:
SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly).
When you add a new shard, you don’t have to transfer the old data to it.
You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently.
You should be concerned about the sharding scheme in the following cases:
* Queries are used that require joining data (IN or JOIN) by a specific key. If data is sharded by this key, you can use local IN or JOIN instead of GLOBAL IN or GLOBAL JOIN, which is much more efficient.
* A large number of servers is used (hundreds or more) with a large number of small queries (queries of individual clients - websites, advertisers, or partners).
In order for the small queries to not affect the entire cluster, it makes sense to locate data for a single client on a single shard.
Alternatively, as we’ve done in Yandex.Metrica, you can set up bi-level sharding: divide the entire cluster into “layers”, where a layer may consist of multiple shards.
Data for a single client is located on a single layer, but shards can be added to a layer as necessary, and data is randomly distributed within them.
Distributed tables are created for each layer, and a single shared distributed table is created for global queries.
It seems there is a solution for small queries like ours (the second bullet above), but I am not clear on the details. Does it mean that, when running a query with the predicate colX=..., I need to find the corresponding "layer" that contains its rows and then query the distributed table for that layer?
Is there a way to query the global distributed table for these small queries?
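For reference, the sharding function named above is the published jump consistent hash algorithm (Lamping & Veach, 2014). A Java transcription is below, only to illustrate that a client which knows the sharding key and shard count can compute the owning shard for a given colX up front; whether this matches ClickHouse's jumpConsistentHash exactly for your column type is an assumption to verify:

    public final class JumpHash {
        // Jump consistent hash: maps a 64-bit key to a bucket in [0, numBuckets).
        public static int jumpConsistentHash(long key, final int numBuckets) {
            long b = -1;
            long j = 0;
            while (j < numBuckets) {
                b = j;
                key = key * 2862933555777941757L + 1;
                j = (long) ((b + 1) * ((double) (1L << 31) / ((key >>> 33) + 1)));
            }
            return (int) b;
        }

        public static void main(final String[] args) {
            // e.g. which of the 1000 shards should hold rows with colX = 42?
            System.out.println(jumpConsistentHash(42L, 1000));
        }
    }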

How is data consistency handled in distributed caching using Oracle Coherence, where each cluster node is responsible only for a piece of data?

How is data consistency handled in a distributed cache using Oracle Coherence, where each cluster node is responsible only for a piece of data?
I am also confused about the following:
Are cluster nodes on different servers, each with its own local cache?
For instance, say I have node A with cache "a" and node B with cache "b"; is the database on a separate server D?
When there is an update, is the update first made on D and then written back to caches a and b? Or how does data consistency work?
An explanation in layman's terms would be helpful, as I am new to Oracle Coherence.
Thank you!
Coherence uses two different distribution mechanisms: full replication and data partitioning; each distributed cache is configured to use one of these. Most caches in most large systems use the partitioned model, because they scale very well, adding storage with each server and maintaining very high performance even up to hundreds of servers.
The Coherence software architecture is service based; when Coherence starts, it first creates a local service for managing clustering, and that service communicates over the network to locate and then join (or create, if it is the first server running) the cluster.
If you have any partitioned caches, then those are managed by partitioned cache service(s). A partitioned cache service coordinates across the cluster to manage the entirety of the partitioned cache. It does this dynamically, starting by dividing the responsibilities of data management evenly across all of the storage-enabled nodes. The data in the cache(s) is partitioned, which means "sliced up", so that some values will go to server 1, some values to server 2, etc. The data ownership model prevents any confusion about who owns what, so even if a message gets delayed on the network and ends up at the wrong server, no damage is done, and the system self-corrects. If a server dies, whatever data (slices) it was managing is backed up by one or more other servers, and the servers work together to ensure that new backups are made for any data that does not have the desired number of backups. It is a dynamic system.
There are several different APIs provided to an application, starting with an API as simple as using a hash map (in fact it is the Java Map API).
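For example, a partitioned cache is used through ordinary Map-style calls, and Coherence routes each key to whichever node owns its partition; the cache name and key/value types below are arbitrary:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public final class CoherenceHello {
        public static void main(final String[] args) {
            // Joins (or starts) the cluster and obtains a cache managed by the partitioned cache service.
            final NamedCache<String, String> orders = CacheFactory.getCache("orders"); // arbitrary cache name

            // Plain Map-style access; the owning node for each key is resolved by Coherence.
            orders.put("order-1", "pending");
            System.out.println(orders.get("order-1"));

            CacheFactory.shutdown();
        }
    }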

How Yandex implemented 2-layered sharding

In the ClickHouse documentation, there is a mention of Yandex.Metrica implementing bi-level sharding.
"Alternatively, as we've done in Yandex.Metrica, you can set up bi-level sharding: divide the entire cluster into "layers", where a layer may consist of multiple shards. Data for a single client is located on a single layer, but shards can be added to a layer as necessary, and data is randomly distributed within them."
Is there a detailed implementation of this sharding scheme documented somewhere?
Logically, Yandex.Metrica has only one high-cardinality ID column that serves as the main sharding key.
By default, SELECTs from a table with the Distributed engine request partial results from one replica of each shard.
If you have hundreds of servers or more, querying all shards (probably 1/2 or 1/3 of all servers) means a lot of network communication, which might introduce more latency than the actual query execution.
The reason for this behavior is that ClickHouse allows writing data directly to shards (bypassing the Distributed engine and its configured sharding key), and the application that does so is not forced to comply with the sharding key of the Distributed table (it can choose differently to spread data more evenly, or for whatever other reason).
So the idea of bi-level sharding is to split the large cluster into smaller sub-clusters (10-20 servers each) and make most SELECT queries go through Distributed tables that are configured against the sub-clusters, thus requiring less network communication and lowering the impact of possible stragglers.
A global Distributed table for the whole large cluster is also configured for ad-hoc or overview-style queries, but those are not as frequent and have lower latency requirements.
This still leaves the application that writes data free to balance it arbitrarily between the shards forming a sub-cluster (by writing directly to them).
But to make this all work together, the applications that write and read data need a consistent mapping from whatever high-cardinality ID is used (CounterID in the case of Metrica) to the sub-cluster ID and the hostnames it consists of. Metrica stores this mapping in MySQL, but in other cases something else might be more applicable.
An alternative approach is to use the "optimize_skip_unused_shards" setting, which makes SELECT queries that have a condition on the sharding key of the Distributed table skip shards that are not supposed to have the data. It introduces the requirement that data be distributed between shards exactly as if it had been written through this Distributed table, otherwise the report will miss any misplaced data.
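As a hedged sketch of that second approach from the client side, assuming the ClickHouse JDBC driver; the connection details and the Distributed table name are placeholders, and the setting could equally be set in the user profile instead of per query:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public final class SmallShardedQuery {
        public static void main(final String[] args) throws Exception {
            // Point this at any node that has the Distributed table (URL is a placeholder).
            try (Connection conn = DriverManager.getConnection("jdbc:clickhouse://ch-node:8123/default");
                 PreparedStatement stmt = conn.prepareStatement(
                     // A condition on the sharding key plus the setting lets ClickHouse prune shards
                     // instead of fanning the query out to all of them.
                     "SELECT * FROM distributed_table WHERE colX = ? " +
                     "SETTINGS optimize_skip_unused_shards = 1")) {
                stmt.setLong(1, 42L); // example key
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }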

HBase Replication - Replicate data in 3 data centers

In our application we have data from 3 different countries, and we persist the data in HBase.
In each country, we will keep the data of all 3 countries.
To achieve this, is it possible to create our Hadoop cluster using data centers in all 3 countries and keep the data replication factor at 3, so that, thanks to the rack-awareness feature, our data gets automatically replicated across all 3 countries?
Any pointers will be of great help.
Thanks
You can't have an HBase cluster span countries. This won't work because of latency, failover problems, network issues, etc.
A good option would be to have 3 clusters, one HBase table per country, and sync the tables between the clusters as proposed above.
As far as I know, only Google has successfully implemented a multi-country database providing both consistency and availability: Spanner. But the key elements of that solution are a private physical network between the data centers and their own implementation of NTP, which guarantees that all servers across the world have the same clock with just a few milliseconds' precision.
This solution looks theoretically feasible, but writes may become pretty slow as data needs to be replicated to 3 nodes located in different geographies. It needs to be tried out to check whether the latency is within a tolerable limit.
Another option could be to have three different HBase clusters at the three locations and design the tables in such a way that tables from one HBase cluster can be copied to the others during night hours to keep the data in sync daily. In this case, an HBase cluster will have current data from its own location, but the data from the other two locations will lag by a day.
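If you go the three-clusters route, the continuous sync is usually built on HBase's own replication. A hedged sketch using the HBase 2.x Java Admin API follows; the peer ID, ZooKeeper quorum, and cluster key are placeholders, and the column families to replicate must separately have their replication scope enabled:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public final class AddCountryPeer {
        public static void main(final String[] args) throws Exception {
            final Configuration conf = HBaseConfiguration.create(); // local (source) cluster config
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Admin admin = connection.getAdmin()) {
                // Cluster key of the remote cluster: zkQuorum:zkPort:zkParent (placeholder values).
                final ReplicationPeerConfig peer = ReplicationPeerConfig.newBuilder()
                    .setClusterKey("zk1.country-b:2181:/hbase")
                    .build();
                admin.addReplicationPeer("country_b", peer);
                // Column families to replicate also need REPLICATION_SCOPE => 1 on the table definition.
            }
        }
    }

For the nightly-copy variant instead, the bundled CopyTable MapReduce job (org.apache.hadoop.hbase.mapreduce.CopyTable) pointed at the remote cluster is the usual tool.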
