Reliable Binance Smart Chain Node Endpoint - binance-smart-chain

I am looking to get an endpoint for the BSC (BNB) chain; are there any reliable providers? I tried Moralis Speedy Nodes, but the connection frequently drops when I use it in a Node.js program. I am using a WSS connection to watch pending transactions.
Any other reliable providers?
Thanks
Sam

You can run your own Binance Smart Chain (BNB Chain) node for maximum reliability.
Miscellaneous instructions exist elsewhere, but it is better to follow the official BNB Chain guide on how to run a node.
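If you stay on a hosted endpoint, dropped WSS connections are usually handled client-side by resubscribing whenever the socket closes. Here is a minimal Python sketch of that pattern (the endpoint URL is a placeholder; eth_subscribe with "newPendingTransactions" is the standard geth-style API that BSC nodes expose):

    import asyncio
    import json

    import websockets  # pip install websockets

    WSS_URL = "wss://your-bsc-endpoint.example/ws"  # placeholder endpoint

    async def watch_pending_txs():
        while True:  # reconnect forever; providers do drop idle sockets
            try:
                async with websockets.connect(WSS_URL, ping_interval=20) as ws:
                    # Ask the node to push new pending transaction hashes.
                    await ws.send(json.dumps({
                        "jsonrpc": "2.0", "id": 1,
                        "method": "eth_subscribe",
                        "params": ["newPendingTransactions"],
                    }))
                    print("subscribed:", await ws.recv())
                    async for message in ws:
                        tx_hash = json.loads(message)["params"]["result"]
                        print("pending tx:", tx_hash)
            except (websockets.ConnectionClosed, OSError) as exc:
                print("connection lost, retrying:", exc)
                await asyncio.sleep(2)  # brief backoff before reconnecting

    asyncio.run(watch_pending_txs())

The same reconnect-and-resubscribe loop applies whether the endpoint is your own node or a provider's.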

Related

Is there a way to modify an agent in the buffer zone of Repast HPC Grid, and propagate those changes back to the original agent?

My question is about modifying agents on another process. I am using a grid with static agents, one agent per grid cell. Each agent can get its direct neighbours using the Moore2DGridQuery. Then, depending on the neighbouring agents' states, an agent can choose one of its neighbours and change that neighbour's state, much as in the Humans and Zombies demo, where a zombie can infect a human. However, since an agent can modify a direct neighbour, that neighbour could be an agent in the buffer zone. So if I want to "infect" an agent in the buffer zone and propagate that change back to the original agent, what is the best approach?
There's really no built-in way to do this. Synchronization flows from the original to the ghost/buffered agent. The intention is that the non-ghost (non-buffered) agents are active and would, for example, look around at their neighbours and then change themselves in some way. Is it possible to refactor your agent behavior with that in mind?
You could code your own mechanism for this reverse synchronization, but you'd have to use MPI directly, sending agent ids and the updated state, rather than any of Repast HPC's synchronization mechanisms. If you do that, probably the easiest way is to send all the changes to rank 0 and then have each rank query rank 0 for any applicable changes. Again, though, that's MPI programming and not strictly related to Repast HPC.
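As a pattern sketch only: Repast HPC itself is C++, but the rank-0 relay idea looks like this in mpi4py, shown here for brevity. The agent ids, the "infected" state, and both helper functions are hypothetical stand-ins.

    from mpi4py import MPI  # pip install mpi4py

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def collect_local_ghost_changes():
        # Hypothetical: in the real model this list is filled as
        # buffer-zone neighbours get "infected" during the tick.
        return [(f"agent-{rank}-0", "infected")]

    def apply_changes_to_owned_agents(changes):
        # Hypothetical: each rank applies only the changes that target
        # agents it owns; here we just print them.
        for agent_id, new_state in changes:
            print(f"rank {rank}: apply {new_state} to {agent_id}")

    # Each rank sends its ghost-agent changes to rank 0...
    all_changes = comm.gather(collect_local_ghost_changes(), root=0)
    # ...rank 0 flattens them into one list...
    merged = [c for per_rank in all_changes for c in per_rank] if rank == 0 else None
    # ...and the merged list is broadcast back so every rank can pick up
    # the changes that apply to agents it actually owns.
    merged = comm.bcast(merged, root=0)
    apply_changes_to_owned_agents(merged)

Run with, e.g., mpiexec -n 4 python relay.py. In C++ you would do the same with MPI_Gather/MPI_Bcast (or point-to-point sends) over serialized (agent id, state) pairs.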

Find the number of requests made in the last minute

Given requests arriving at random times, return the requests from the last one minute.
This was a question asked in a Microsoft technical interview. I could not find any more details about the problem. Can anyone suggest how to approach it?
This is a really interesting question, which serves as a baseline for multiple cloud services that operate on the idea of throttling. The idea behind throttling is to limit the number of requests per second from a given client depending on the throughput they are paying for. An example of such a service is DynamoDB from AWS.
Since cloud services usually have many clients and a lot of traffic, one must design a solution at scale that works under high load. A queue would indeed be the data structure of choice to handle such a scenario. However, would enqueuing and dequeuing millions of transactions per minute be efficient? A general way to avoid a long queue tail is to introduce a precision trade-off through batching, as sketched below.
A blog post that covers this concept in depth: https://medium.com/@saisandeepmopuri/system-design-rate-limiter-and-data-modelling-9304b0d18250
Let me know if you need any more explanation. Cheers!
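To make the batching trade-off concrete, here is a hedged sketch (all names are my own) that counts requests into one-second buckets instead of keeping one queue entry per request, so memory is bounded by the window length rather than by traffic volume, at the cost of bucket-granularity precision:

    import time
    from collections import deque

    class BucketedCounter:
        def __init__(self, window_seconds=60):
            self.window = window_seconds
            self.buckets = deque()  # (second, count) pairs, oldest first

        def _evict(self, now_s):
            while self.buckets and now_s - self.buckets[0][0] >= self.window:
                self.buckets.popleft()  # bucket fell out of the window

        def record(self):
            now_s = int(time.monotonic())
            self._evict(now_s)
            if self.buckets and self.buckets[-1][0] == now_s:
                second, count = self.buckets.pop()
                self.buckets.append((second, count + 1))
            else:
                self.buckets.append((now_s, 1))

        def count_last_minute(self):
            self._evict(int(time.monotonic()))
            return sum(count for _, count in self.buckets)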
Make a queue.
Add new requests to the queue tail.
After every add, and before every check, remove entries older than one minute from the queue head.
When a check is needed, return the queue size.
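A minimal sketch of this queue approach, using a deque of timestamps (all names are my own):

    import time
    from collections import deque

    class RequestCounter:
        def __init__(self, window_seconds=60.0):
            self.window = window_seconds
            self.events = deque()  # request timestamps, oldest at the head

        def _evict_old(self, now):
            # Drop everything older than the window from the head.
            while self.events and now - self.events[0] > self.window:
                self.events.popleft()

        def record(self):
            now = time.monotonic()
            self._evict_old(now)
            self.events.append(now)

        def count_last_minute(self):
            self._evict_old(time.monotonic())
            return len(self.events)

    counter = RequestCounter()
    counter.record()
    print(counter.count_last_minute())  # -> 1

Both record and count are amortized O(1), but memory grows with the number of requests in the window, which is exactly what the batching trade-off above addresses.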

ZooKeeper Time Synchronization

How does ZooKeeper deal with time?
Are the znodes/clients synchronized, and how?
Otherwise, how does the algorithm work without time synchronization?
I found a related question here, but it does not answer mine:
How does Zookeeper synchronize the clock in the cluster
Thanks in advance
As you may have heard, ZooKeeper elects a leader for the ensemble. Thereafter, all write requests go through the leader.
The leader is therefore responsible for preserving the order of write requests (yes, the order is determined by the time at which a request reaches the leader). Since all write requests are served by the leader, there is no need to worry about synchronization: ZooKeeper does not depend on clock synchronization.
How the leader transmits new values to the followers is a separate problem, addressed by the consensus protocol ZAB (ZooKeeper Atomic Broadcast). This protocol makes sure that a majority of the ensemble has accepted the new value before an OK response is sent back for the write request.
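A small illustration with the kazoo Python client (the connection string is a placeholder): sequential znodes get monotonically increasing suffixes assigned by the leader, so the order reflects arrival at the leader, not the clients' local clocks.

    from kazoo.client import KazooClient  # pip install kazoo

    client = KazooClient(hosts="127.0.0.1:2181")  # placeholder ensemble address
    client.start()

    # Each create is a write request; the leader serializes them, so the
    # sequence suffixes reflect arrival order, not wall-clock time.
    for _ in range(3):
        path = client.create("/demo/req-", b"", sequence=True, makepath=True)
        print(path)  # e.g. /demo/req-0000000000, /demo/req-0000000001, ...

    client.stop()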

P2P distribution - abstract algorithm for supervising peers

I plan to make a system for distributing VM images among several stations using the BitTorrent protocol. The current system looks as follows:

                                             |-[room with 20 PCs]
[srv_with_images]-->--[1Gbps bottleneck]-->--|
                                             |-[2nd room with 20 PCs]

Every night, all the PCs download images through the 1 Gbps bottleneck at once, and it takes a lot of time. We plan to use BitTorrent to speed up the distribution of images using peer-to-peer exchange between all the PCs. However, there is a problem: when an image appears on the origin server, the server starts to act as a single seed from which all peers download the file simultaneously. So we again fall into the trap of the bottleneck. To speed up the distribution we need to implement (at least we think we need) an abstract high-level algorithm that:
Ensures that at the beginning, when a new image arrives, only a small portion of the stations download the image from the origin,
Once that small portion starts seeding, has the rest of the PCs (or another, bigger portion) start peering, preferably only from the PCs in the room rather than from the origin,
Doesn't rely on a "static" list of initial peers, as some computers may be offline during the day. We can't assume that any particular computer will always be up and running; a peer may also be turned off at any time.
Are there any specific algorithms that can help us design this? The most naive way would be to keep an active-server list somewhere and have a daemon choose initial peers for each torrent. But maybe there are more elegant ways to do this kind of thing?
Another option would be to ensure that only some peers can download from the origin while the rest of the peers download from each other (but not from the origin). Is that possible in the BitTorrent protocol?
If you are using BitTorrent, no special coordination is necessary.
Peers behind the bottleneck can talk to each other directly and share the bandwidth. The rarest-first piece-picking algorithm will mostly ensure that they download different pieces from the server and then share them with each other.
LSD (Local Service Discovery) may help speed up LAN-local peer discovery, but it should work with a normal tracker too if there are no NAT shenanigans in play.
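To illustrate why no extra coordination is needed, here is a toy sketch of rarest-first piece picking (the peer and piece data are made up):

    from collections import Counter

    def pick_rarest_piece(needed, peer_bitfields):
        # needed: set of piece indices we still lack.
        # peer_bitfields: one set of advertised pieces per known peer.
        availability = Counter()
        for bitfield in peer_bitfields:
            for piece in bitfield & needed:
                availability[piece] += 1
        if not availability:
            return None  # no peer has anything we need yet
        # Fewest holders wins, so peers naturally fetch different pieces.
        return min(availability, key=availability.get)

    needed = {0, 1, 2, 3}
    peers = [{0, 1}, {1, 2}, {1, 3}]
    print(pick_rarest_piece(needed, peers))  # -> 0 (0, 2, 3 tied at one holder; 1 is common)

Because each peer prefers pieces that few others have, the origin server mostly hands out each distinct piece once, and the rooms exchange the rest among themselves.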

Distributed singleton service for failover

I have an abstract question.
I need a fault-tolerant service. The service can only be running on one node at a time. This is the key.
With two connected nodes: A and B.
If A is running the service, B must be waiting.
If A is turned off, B should detect this and start the service.
If A is turned on again, A should wait and not run the service.
And so on: if B is turned off, A starts; if A is turned off, B starts.
I have thought about using a heartbeat protocol to sync the status of the nodes and detect timeouts; however, there are a lot of race conditions.
I could add a third node with a global lock, but I'm not sure how to do this.
Does anybody know a well-known algorithm for this? Or, better, is there any open source software that lets me control this kind of thing?
Thanks
If you can provide some sort of shared memory between the nodes, there is a classical algorithm that solves this problem, called Peterson's algorithm.
It is based on two additional variables, called flag and turn. turn is an integer whose value is the index of the node that is allowed to be active at the moment. For example, turn = 1 indicates that node 1 has the right to be active and the other node should wait; in other words, it is its turn to be active, which is where the name comes from.
flag is a boolean array where flag[i] indicates that the i-th node declares itself ready for service. In your setup, flag[i] = false means that the i-th node is down. The key part of the algorithm is that a node which is ready for service (i.e. flag[i] = true) has to wait until it obtains the turn.
The algorithm was originally developed to solve the problem of executing a critical section without conflict. In your case, the critical section is simply running the service. You just have to ensure that before the i-th node is turned off, it sets flag[i] to false. This is definitely the tricky part, because a node that crashes obviously cannot set any value; I would use some sort of heartbeat for that.
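A runnable sketch of Peterson's algorithm for two nodes, shown here with threads and in-process variables standing in for the shared memory the answer assumes (CPython's GIL makes these plain reads and writes effectively sequentially consistent; across real machines you would need atomic operations or memory barriers):

    import threading
    import time

    flag = [False, False]  # flag[i]: node i wants to run the service
    turn = 0               # index of the node whose turn it is

    def node(i):
        global turn
        other = 1 - i
        for _ in range(3):
            # Entry protocol: declare readiness, then yield the turn.
            flag[i] = True
            turn = other
            while flag[other] and turn == other:
                pass  # busy-wait while the other node is active
            # Critical section: only one node runs the service at a time.
            print(f"node {i} is running the service")
            time.sleep(0.1)
            # Exit protocol: withdraw readiness so the other node can enter.
            flag[i] = False

    threads = [threading.Thread(target=node, args=(i,)) for i in (0, 1)]
    for t in threads: t.start()
    for t in threads: t.join()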
Regarding open source software that solves similar problems, try searching for "cluster failover". Read about Paxos (used in Google's Chubby lock service) and the Google File System. There are plenty of solutions, but if you want to implement something yourself, I would try Peterson's algorithm.
