I am trying to understand what is actually in Simple Nightshade, phase 0 of the sharding strategy in NEAR. I have read all the relevant Medium posts and watched the videos in them, and I have also searched the Zulip chat and got some understanding of the coming chunk and block producer selection algorithm, which I "think" might be in phase 1. But I cannot understand what was actually implemented.
Here is some of the information I got from the Medium posts:
There are now 4 shards; only state is sharded, computation is not (all validators, aka block producers, have to track all shards). — article [near-launches-simple-nightshade-]
Above all, it will be far cheaper to reach consensus — i.e., add a block to NEAR’s chain, since each block will only require a backing of 0.1% of all of the staked coins in NEAR’s ecosystem. In article [how-simple-nightshade-works]
Under the section "What makes Simple Nightshade unique" - in article [primer]:
On a physical level, no participant downloads either the full state or the full logical block. Instead, they maintain the state that is connected to the shards for which they validate transactions.
Here are my questions about what is in phase 0:
What is meant by "all validators track all the shards"? As in, do they compute all the state transitions in all the shards for the same block? Or do they get assigned to one shard for each epoch but rotate through all shards?
If the former is true for Q1, and the runtime is not aware of sharding, does this mean all validators will have 4 runtimes running at the same time to accommodate cross-shard transactions? And in that case, how can the validators not download all the states?
How are shards currently split? By Merkle trie hash address?
How does this phase require less backing per block?
I saw from this Stack Overflow post that the gas limit per block is now 4 times that of pre-phase 0. So this means (in this phase) that validators may be doing 4x the work they did before (until chunk producers come in).
I would really appreciate help understanding this! Thank you!
I think I've finally got a grasp of the fundamentals of how to allocate shards for Elasticsearch. Please correct me if I'm wrong; this is what I've pieced together:
Ideally, there should only exist one shard per index, per node.
The only reason why we would ever want to configure more than
one shard IS to over-allocate for future growth (i.e. adding more
nodes to physically support the data).
Now, assuming what I have above is correct, I then wonder if there are any performance issues or differences if I only had one node with 1 shard versus one node with 5 shards. Can anyone enlighten me on this subject?
"The only reason why we would ever want to configure more than one shard IS to over-allocate for future growth (i.e. adding more nodes to physically support the data)."
Not necessarily so. Having more shards helps parallelise your queries and finish them faster, but beyond a point it becomes counterproductive, as too many shards mean overhead in merging the individual shard responses and time spent queuing and such things.
"one node with 1 shard versus one node with 5 shards"
It depends on what your use case is, but you should see some performance gain for bigger queries with 5 shards.
I believe it depends on the size of the shards. For instance, on the elastic website, they say the following:
"Querying lots of small shards will make the processing per shard
faster, but as many more tasks need to be queued up and processed in
sequence, it is not necessarily going to be faster than querying a
smaller number of larger shards. Having lots of small shards can also
reduce the query throughput if there are multiple concurrent queries."
https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
In practice I have found that some exploratory testing with realistic queries helps me determine more definitively how I should move forward with my architecture. It really depends on the use case. As was stated previously, however, there comes a point where you can sort of "over-optimize" and it ends up cancelling out any noticeable gains you may have otherwise obtained by doing the opposite.
To be succinct, one shard per index, per node is a fine practice. But if you find yourself needing more, assess your use case first and determine whether additional shards are truly necessary.
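For illustration, here is a minimal sketch of setting the shard count explicitly at index creation with the official Python client, elasticsearch-py (the index name is made up and the exact keyword arguments vary a bit between client versions):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # One primary shard per index is a reasonable default for modest data sizes.
    # Note that the primary shard count cannot be changed later without reindexing.
    es.indices.create(
        index="my-index",
        body={"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
    )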
I'm working on a booking system with a single RDBMS. This system has units (products) with several characteristics (attributes) like: location, size [m2], has sea view, has air conditioner…
On top of that there is pricing, with prices for different periods, e.g. 1/1/2018 – 1/4/2018 -> $30... Also, there is capacity with its own periods, 1/8/2017 – 1/6/2018… and availability, which works the same as capacity.
Each price can have its own type: per person, per stay, per item… There are restrictions for different age groups, extra bed, …
We are talking about 100k potential units. The end user can make a request to search all units in several countries, for two adults and children aged 3 and 7, for the period 1/1/2018 – 1/8/2018, where there are 2 rooms with one king-size bed and one single bed + one extra bed. Also, there can be other rules, which are handled by a rule engine.
In the classical approach, filtering would be done in several iterations, trying to eliminate as much as possible in each one. There could be several tables with intermediate results, which must be re-synchronized every time something is changed through administration.
Recently I was reading about Hadoop and Storm, which are highly scalable and provide parallelism. I was wondering if this kind of technology is suitable for solving the described problem. The main idea is to write "one method" that validates each unit against the given search filter. Later this function is easy to extend with additional logic. Each node in the cluster could take its own portion of the load: if there are 10 nodes, each of them could process 10k units.
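Very roughly, something like the sketch below is what I have in mind (the unit and filter fields are made up; the real checks would come from the database and the rule engine):

    from multiprocessing import Pool

    def matches(unit, search):
        """Return True if a single unit satisfies the search filter."""
        if unit["country"] not in search["countries"]:
            return False
        if unit["max_guests"] < search["adults"] + len(search["children_ages"]):
            return False
        # ... pricing, availability and rule-engine checks would go here ...
        return True

    def filter_units(units, search, workers=10):
        """Each worker validates its own portion of the units in parallel."""
        with Pool(workers) as pool:
            flags = pool.starmap(matches, [(u, search) for u in units])
        return [u for u, ok in zip(units, flags) if ok]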
In the Cloudera tutorial there is a step where content from the RDBMS is transferred to HDFS with Sqoop. This process takes some time, so it does not seem like a good approach for this problem. The given problem is highly deterministic; it requires immediate synchronization and operating on fresh data. Maybe use some streaming service and write into HDFS and the RDBMS in parallel? Do you recommend some other technology, like Storm?
What could be a possible architecture, as a starting point, that satisfies all the requirements of this problem?
Please point me in the right direction if this question is off-topic for the site.
My question is mostly based on the following article:
https://qbox.io/blog/optimizing-elasticsearch-how-many-shards-per-index
The article advises against having multiple shards per node for two reasons:
Each shard is essentially a Lucene index; it consumes file handles, memory, and CPU resources
Each search request will touch a copy of every shard in the index. Contention arises and performance decreases when the shards are competing for the same hardware resources
The article advocates the use of rolling indices for indices that see many writes and fewer reads.
Questions:
Do the problems of resource consumption by Lucene indices arise if the old indices are left open?
Do the problems of contention arise when searching over a large time range involving many indices and hence many shards?
How does searching many small indices compare to searching one large one?
I should mention that in our particular case, there is only one ES node though of course generally applicable answers will be more useful to SO readers.
It's very difficult to spit out general best practices and guidelines when it comes to cluster sizing as it depends on so many factors. If you ask five ES experts, you'll get ten different answers.
After several years of tinkering and fiddling around with ES, I've found that what works best for me is always to start small (one node, however many indices your app needs, and one shard per index), load a representative data set (ideally your full data set), and load test to death. Your load testing scenarios should represent the real maximum load you're experiencing (or expecting) in your production environment during peak hours.
Increase the capacity of your cluster (add shards, add nodes, tune knobs, etc.) until your load test passes, and make sure to increase your capacity by a few more percent in order to allow for future growth. You don't want your production to be fine now, you want it to be fine a year from now. Of course, it will depend on how fast your data will grow, and it's very unlikely that you can predict with 100% certainty what will happen a year from now. For that reason, as soon as my load test passes, if I expect large exponential data growth, I usually increase the capacity by another 50%, knowing that I will have to revisit my cluster topology within a few months or a year.
So to answer your questions:
Yes, if old indices are left open, they will consume resources.
Yes, the more indices you search, the more resources you will need in order to go through every shard of every index. Be careful with aliases spanning many, many rolling indices (especially on a single node)
This is too broad to answer, as it again depends on the amount of data we're talking about and on what kind of query you're sending, whether it uses aggregation, sorting and/or scripting, etc
Do the problems of resource consumption by Lucene indices arise if the old indices are left open?
Yes.
Do the problems of contention arise when searching over a large time range involving many indices and hence many shards?
Yes.
How does searching many small indices compare to searching one large one?
When ES searches an index it will pick one copy of each shard (be it replica or primary) and ask that copy to run the query on its own set of data. Searching a shard will use one thread from the node's search threadpool (the threadpool is per node). One thread basically means one CPU core. If your node has 8 cores, then at any given time the node can search 8 shards concurrently.
Imagine you have 100 shards on that node and your query wants to search all of them. ES will initiate the search, and all 100 shards will compete for the 8 cores, so some shards will have to wait some amount of time (microseconds, milliseconds, etc.) to get their share of those 8 cores. Having many shards means fewer documents on each and, thus, potentially a faster response time from each. But then the node that initiated the request needs to gather all the shards' responses and aggregate the final result. So, the response will be ready when the slowest shard finally responds with its set of results.
On the other hand, if you have a big index with very few shards, there is not so much contention for those CPU cores. But since each shard has a lot of work to do individually, it can take more time to return its individual result.
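As a toy illustration of that arithmetic (this is only the back-of-the-envelope model described above, not how ES actually schedules work):

    import math

    def estimated_query_time(num_shards, cores, per_shard_seconds):
        """Shards are searched in 'waves' of at most `cores` at a time;
        the response is ready when the slowest wave finishes."""
        waves = math.ceil(num_shards / cores)
        return waves * per_shard_seconds

    # 100 small, fast shards on an 8-core node still need 13 waves.
    print(estimated_query_time(100, 8, 0.05))   # roughly 0.65 s
    # 8 larger shards fit in a single wave, but each shard works longer.
    print(estimated_query_time(8, 8, 0.4))      # 0.4 s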
When choosing the number of shards, many aspects need to be considered. But for some rough guidelines, yes, 30 GB per shard is a good limit. However, this won't work for everyone and for every use case, and the article fails to mention that. If, for example, your index uses parent/child relationships, those 30 GB per shard might be too much and the response time of a single shard can be too slow.
You took this out of context: "The article advises against having multiple shards per node". No, the article advises one to think about how to structure an index's shards beforehand. One important step here is testing. Please, test your data before deciding how many shards you need.
You mentioned "rolling indices" in the post, and I assume you mean time-based indices. In this case, one question is about the retention period (for how long you need the data). Based on the answer to this question you can determine how many indices you'll have. Knowing how many indices you'll have gives you the total number of shards you'll have.
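As a rough back-of-the-envelope (the numbers here are made up; plug in your own retention period and shard settings):

    # Time-based indices: retention and rollover period determine the index count,
    # which in turn determines the total shard count.
    retention_days = 90       # how long the data is kept
    rollover_days  = 1        # one new index per day
    primaries      = 1
    replicas       = 1

    num_indices  = retention_days // rollover_days
    total_shards = num_indices * primaries * (1 + replicas)
    print(num_indices, total_shards)   # 90 indices, 180 shards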
Also, with rolling indices, you need to take care of deleting the expired indices. Have a look at Curator for this.
I have an Elasticsearch setup with 192 active indices ranging from a few hundred MB to possibly 5 GB each. I read that for a Logstash use case with 1 GB indices you should only use 1 shard. The difference with my setup is that I will have more users (an estimate of up to 100) expecting a quick response time. I intend to have 1 replica for reliability.
Will having 1 shard per index still be appropriate for my use case?
In a word: yes.
The need to create multiple primary shards comes from the need to isolate documents, from extreme document counts (e.g., when you're in the billions-of-documents range), or from the need to improve write throughput (writing documents across more places, thereby reducing the individual burden).
In practice, you want to shard based on your use case, unless you're in one of those first two scenarios (isolation or extreme counts).
Are you read heavy?
Are you write heavy? (Less common, but it does happen)
If you're read heavy, as most use cases are, then having fewer shards will help you by limiting the request size (fewer places to look). Given that your shard sizes are also relatively small (I'd consider anything under 5 GB to be relatively small), you can easily get away with a single primary shard, and it should benefit your search performance.
Indexes that share the same mappings, but are also tiny ("few hundred MBs"), should likely be combined if you search across them. If they're independent, then it really makes no difference and the isolation sounds like good practice at the expense of slightly bloating your cluster state (with each index).
Have a look at this blog: https://qbox.io/blog/optimizing-elasticsearch-how-many-shards-per-index. It has a lot of good pointers on sharding and shard sizing.
However, the question you really should be asking yourself is: How easy is it to change? When it comes to sizing and scalability, the answer often is "it depends" - and the real question is: How quickly can you reconfigure?
This could e.g. mean that you design your application in a way that allows quick re-spooling of data into a new index, that you use aliases so that you can in fact change these things, and that you think about where your data lies (not just in Elastic, I hope), etc.
Building a system - from the start - so that you can quickly rebuild indices enables you to experiment with sizes and, more importantly, change them as your needs change.
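For example, here is a small sketch with elasticsearch-py showing how an alias lets you rebuild an index behind the scenes (the index and alias names are made up, and exact client signatures vary by version):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Build the replacement index with the new shard count and copy the data over...
    es.indices.create(index="products-v2",
                      body={"settings": {"number_of_shards": 2}})
    es.reindex(body={"source": {"index": "products-v1"},
                     "dest": {"index": "products-v2"}})

    # ...then atomically repoint the alias the application searches through.
    es.indices.update_aliases(body={"actions": [
        {"remove": {"index": "products-v1", "alias": "products"}},
        {"add": {"index": "products-v2", "alias": "products"}},
    ]})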
I have a cluster application, which is divided into a controller and a bunch of workers. The controller runs on a dedicated host, the workers phone in over the network and get handed jobs, so far so normal. (Basically the "divide-and-conquer pipeline" from the zeromq manual, with job-specific wrinkles. That's not important right now.)
The controller's core data structure is unordered_map<string, queue<string>> in pseudo-C++ (the controller is actually implemented in Python, but I am open to the possibility of rewriting it in something else). The strings in the queues define jobs, and the keys of the map are a categorization of the jobs. The controller is seeded with a set of jobs; when a worker starts up, the controller removes one string from one of the queues and hands it out as the worker's first job. The worker may crash during the run, in which case the job gets put back on the appropriate queue (there is an ancillary table of outstanding jobs). If it completes the job successfully, it will send back a list of new job-strings, which the controller will sort into the appropriate queues. Then it will pull another string off some queue and send it to the worker as its next job; usually, but not always, it will pick the same queue as the previous job for that worker.
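For reference, here is a stripped-down sketch of the controller's current in-memory bookkeeping (method names are illustrative; the zeromq plumbing is omitted):

    from collections import defaultdict, deque

    class Controller:
        def __init__(self):
            self.queues = defaultdict(deque)   # category -> queue of job strings
            self.outstanding = {}              # worker_id -> (category, job)

        def add_job(self, category, job):
            self.queues[category].append(job)

        def hand_out(self, worker_id, preferred=None):
            """Pop a job, preferring the worker's previous category."""
            categories = ([preferred] if preferred in self.queues else []) + list(self.queues)
            for cat in categories:
                if self.queues[cat]:
                    job = self.queues[cat].popleft()
                    self.outstanding[worker_id] = (cat, job)
                    return cat, job
            return None

        def worker_crashed(self, worker_id):
            cat, job = self.outstanding.pop(worker_id)
            self.queues[cat].append(job)       # put the job back on its queue

        def job_done(self, worker_id, new_jobs):
            self.outstanding.pop(worker_id, None)
            for cat, job in new_jobs:          # sort new job strings into queues
                self.queues[cat].append(job)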
Now, the question. This data structure currently sits entirely in main memory, which was fine for small-scale test runs, but at full scale is eating all available RAM on the controller, all by itself. And the controller has several other tasks to accomplish, so that's no good.
What approach should I take? So far, I have considered:
a) to convert this to a primarily-on-disk data structure. It could be cached in RAM to some extent for efficiency, but jobs take tens of seconds to complete, so it's okay if it's not that efficient,
b) using a relational database - e.g. SQLite, (but SQL schemas are a very poor fit AFAICT),
c) using a NoSQL database with persistency support, e.g. Redis (the data structure maps over trivially, but this still appears too RAM-centric for me to feel confident that the memory-hog problem will actually go away)
Concrete numbers: For a full-scale run, there will be between one and ten million keys in the hash, and less than 100 entries in each queue. String length varies wildly but is unlikely to be more than 250-ish bytes. So, a hypothetical (impossible) zero-overhead data structure would require 2^34 – 2^37 bytes of storage.
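For what it's worth, here is the arithmetic behind that estimate:

    # Zero-overhead lower bound from the numbers above (keys ignored for simplicity).
    keys_low, keys_high = 1_000_000, 10_000_000
    entries_per_queue = 100        # "less than 100 entries" (upper bound)
    bytes_per_string  = 250        # "unlikely to be more than 250-ish bytes"

    low  = keys_low  * entries_per_queue * bytes_per_string   # ~2.5e10, about 2**34.5
    high = keys_high * entries_per_queue * bytes_per_string   # ~2.5e11, about 2**37.9
    print(f"{low / 2**30:.0f} GiB to {high / 2**30:.0f} GiB")  # 23 GiB to 233 GiB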
Ultimately, it all boils down to how you define the efficiency needed on the part of the controller -- e.g. response times, throughput, memory consumption, disk consumption, scalability... These properties are directly or indirectly related to:
number of requests the controller needs to handle per second (throughput)
acceptable response times
future growth expectations
From your options, here's how I'd evaluate each option:
a) to convert this to a primarily-on-disk data structure. It could be
cached in RAM to some extent for efficiency, but jobs take tens of
seconds to complete, so it's okay if it's not that efficient,
Given the current memory-hog problem, some form of persistent storage seems a reasonable choice. Caching comes into play if there is a repeatable access pattern, say the same queue being accessed over and over again -- otherwise, caching is not likely to help.
This option makes sense if 1) you cannot find a database that maps trivially to your data structure (unlikely), 2) for some other reason you want to have your own on-disk format, e.g. you find that converting to a database is too much overhead (again, unlikely).
One alternative to databases is to look at persistent queues (e.g. using a RabbitMQ backing store), but I'm not sure what the per-queue or overall size limits are.
b) using a relational database - e.g. SQLite, (but SQL schemas are a
very poor fit AFAICT),
As you mention, SQL is probably not a good fit for your requirements, even though you could surely map your data structure to a relational model somehow.
However, NoSQL databases like MongoDB or CouchDB seem much more appropriate. Either way, a database of some sort seems viable as long as they can meet your throughput requirement. Many if not most NoSQL databases are also a good choice from a scalability perspective, as they include support for sharding data across multiple machines.
c) using a NoSQL database with persistency support, e.g. Redis (data
structure maps over trivially, but this still appears very RAM-centric
to make me feel confident that the memory-hog problem will actually go
away)
An in-memory database like Redis doesn't solve the memory-hog problem unless you set up a cluster of machines, each of which holds a part of the overall data. That makes sense only if keeping all the data in memory is needed due to low response-time requirements. Yet, given the nature of your jobs, taking tens of seconds to complete, response times with respect to the workers hardly matter.
If you find, however, that response times do matter, Redis would be a good choice, as it handles partitioning trivially using either client-side consistent-hashing or at the cluster level, thus also supporting scalability scenarios.
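For illustration, here is a rough sketch of how the dict-of-queues maps onto Redis lists with redis-py (assuming a local Redis server; the key names are made up):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def add_job(category, job):
        r.rpush(f"queue:{category}", job)      # append to that category's list

    def hand_out(category):
        return r.lpop(f"queue:{category}")     # returns None if the list is empty

    def requeue(category, job):
        r.rpush(f"queue:{category}", job)      # crashed worker: put the job back

    add_job("category-a", "job-string-1")
    print(hand_out("category-a"))              # b'job-string-1'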
In any case
Before you choose a solution, be sure to clarify your requirements. You mention you want an efficient solution. Since efficiency can only be gauged against some set of requirements, here's the list of questions I would try to answer first:
Requirements:
how many jobs are expected to complete, say per minute or per hour?
how many workers are needed to do so?
concluding from that:
what is the expected load in requests per second, and
what response times are expected on part of the controller (handing out jobs, receiving results)?
And looking into the future:
will the workload increase, i.e. does your solution need to scale up (more jobs per time unit, more data per job)?
will there be a need for persistency of jobs and results, e.g. for auditing purposes?
Again, concluding from that,
how will this influence the number of workers?
what effect will it have on the number of requests/second on part of the controller?
With these answers, you will find yourself in a better position to choose a solution.
I would look into a message queue like RabbitMQ. This way it will first fill up the RAM and then use the disk. I have up to 500,000,000 objects in queues on a single server and it's just plugging away.
RabbitMQ works on Windows and Linux and has simple connectors/SDKs to about any kind of language.
https://www.rabbitmq.com/
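For illustration, here is a minimal sketch of handing jobs through RabbitMQ with the pika client (assuming a broker on localhost; the queue and message names are made up):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="jobs.category-a", durable=True)  # survives broker restarts

    # Controller side: enqueue a job string; persistent messages can spill to disk.
    channel.basic_publish(
        exchange="",
        routing_key="jobs.category-a",
        body="job-string-1",
        properties=pika.BasicProperties(delivery_mode=2),
    )

    # Worker side: fetch one job; unacknowledged messages are re-queued on crash.
    method, properties, body = channel.basic_get(queue="jobs.category-a")
    if method is not None:
        print(body)                            # ... do the actual work here ...
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection.close()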