How to choose the fastest WebSocket server from ten WebSocket servers - websocket

There are ten WebSocket servers in all, and my team lead wants me to find the fastest one among them, but I have no idea how. Can anyone help? Thank you very much.
Now my question is: how do I determine the fastest one, and how do I prove that it is the fastest?
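One common approach is to run the same load against each server and compare latency under identical conditions. Below is a minimal benchmarking sketch in Python, assuming each server exposes a WebSocket echo endpoint and that the third-party websockets package is acceptable as a test harness; the URLs, message count, and echo behaviour are all assumptions you would replace with your actual setup.

import asyncio
import statistics
import time

import websockets  # third-party: pip install websockets

# Placeholder endpoints: replace with the ten servers under test.
SERVERS = [f"ws://server{i}.example.com/echo" for i in range(1, 11)]
MESSAGES = 1000  # round trips per server

async def benchmark(uri):
    """Measure round-trip latency for MESSAGES echo exchanges."""
    latencies = []
    async with websockets.connect(uri) as ws:
        for _ in range(MESSAGES):
            start = time.perf_counter()
            await ws.send("ping")
            await ws.recv()
            latencies.append(time.perf_counter() - start)
    return {
        "uri": uri,
        "mean_ms": statistics.mean(latencies) * 1000,
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1000,
    }

async def main():
    # Benchmark servers one at a time so they don't interfere with each other.
    results = [await benchmark(uri) for uri in SERVERS]
    for r in sorted(results, key=lambda r: r["mean_ms"]):
        print(f'{r["uri"]}: mean={r["mean_ms"]:.2f} ms, p99={r["p99_ms"]:.2f} ms')

asyncio.run(main())

Recording mean and tail (p99) latency per server under the same message load gives you both the ranking and the evidence to back it up; you would normally repeat the run several times and also vary the number of concurrent connections.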

Related

How does Envoy edge proxy keep count of the number of requests per host

I am curious about how Envoy stores or manages the active request count for each host and then uses it for Least Request load balancing.
The Envoy documentation states that it picks N hosts at random and then selects the least-requested among them. This algorithm is O(1) and gives very good results. So if Envoy stores the active request count for every host, why doesn't it use an algorithm that works in O(log n) to find the least-requested host, which could be implemented with a suitable data structure such as a segment tree?
I have read through the documentation and tried to look through the source code, but was unable to find what I was looking for.
Documentation
I think this comment answers your question:
// As with tryChooseLocalLocalityHosts, this can be refactored for efficiency
// but O(N) is good enough for now given the expected number of priorities is
// small.
So it's about priorities in the backlog. Feel free to jump in and improve it ;)
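For reference, here is a minimal sketch of the power-of-two-choices least-request selection that the documentation describes. This is not Envoy's actual C++ implementation; the host records and field names are assumptions for illustration only.

import random

def least_request_choice(hosts, choice_count=2):
    """Pick choice_count hosts at random and return the one with the
    fewest active requests, as described for the least-request policy."""
    candidates = random.sample(hosts, min(choice_count, len(hosts)))
    return min(candidates, key=lambda h: h["active_requests"])

# Hypothetical host list for illustration only.
hosts = [{"name": f"host{i}", "active_requests": random.randint(0, 50)} for i in range(10)]
print(least_request_choice(hosts))

The appeal of this scheme is that each pick is O(1) regardless of cluster size, yet it still avoids the worst-loaded hosts with high probability, which is why a full O(log n) index over all hosts is not strictly needed.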

Cache distribution exercise as presented at Google's Hash Code 2017

I'm currently trying to find an efficient solution to the problem stated in this document Hash Code 2017 - Streaming Videos.
TL;DR: To minimize the latency of YouTube videos, cache servers with limited capacity are used. However, not every cache is connected to every endpoint, and not every endpoint requests the same videos. The goal is to minimize the overall latency of the whole network.
My approach was simply to iterate through each endpoint and each of its request blocks and find the optimal cache, i.e. the one with the most latency reduction per video size (I'll just call it request density).
When the optimal cache has already reached its capacity, I try to store the video by exchanging it for videos with lower request density, or use a different cache if there is no other possibility (note that the data center is also a cache in my model).
def distribute_video_requests(endpoint, excluding_caches=None):
    # Use None instead of a mutable default so exclusions don't leak between top-level calls.
    if excluding_caches is None:
        excluding_caches = set()
    caches = endpoint.cache_connections - excluding_caches
    for vr in endpoint.video_requests:
        optimal_cache = find_optimum(caches, vr)
        exchange = try_put(optimal_cache, vr)
        if exchange["conflicting"]:
            excluding_caches.add(optimal_cache)
            # Redistribute the requests of every endpoint affected by the eviction.
            for elm in exchange["affected"]:
                distribute_video_requests(elm["from"], excluding_caches)

for ep in endpoints:
    distribute_video_requests(ep)
You could visualize it as the Brazil nut effect, where the video requests are pieces of different density being sorted in a stack.
The reason I'm explaining all of this is that I can't really tell whether my solution is decent, and if it isn't: what are better approaches to this problem?
If somebody gives you a proposed solution, one thing you could do is pick one of the cache servers, empty it, and then try to work out the best way to fill it up again to get a solution at least as good as the proposed one.
I think this is the knapsack problem, so it will not be easy to find an efficient exact solution to it, or to the original problem.
There are decent approximations to the knapsack problem, so I think it might be worth programming one up and throwing it at the solutions from your method. If it can't improve much on the original solution, congratulations! If it can, you have another solution method: keep running the knapsack problem to adjust the contents of each cache server until you can't find any more improvements.
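A rough sketch of that refill step, treated as a 0/1 knapsack over a single cache: item sizes are video sizes, and each item's value is the total latency that caching the video would save for the connected endpoints. The data layout and the example numbers are assumptions for illustration, not part of the original problem input.

def refill_cache(capacity, videos):
    """0/1 knapsack. videos is a list of (size, value) pairs, where value is the
    total latency saved if that video is placed in this cache.
    Returns (best_value, chosen_indices)."""
    # dp[c] = (best value achievable with capacity c, indices of the chosen videos)
    dp = [(0, [])] * (capacity + 1)
    for i, (size, value) in enumerate(videos):
        # Iterate capacities downwards so each video is used at most once.
        for c in range(capacity, size - 1, -1):
            candidate = dp[c - size][0] + value
            if candidate > dp[c][0]:
                dp[c] = (candidate, dp[c - size][1] + [i])
    return dp[capacity]

# Hypothetical example: 100 MB cache, (size_mb, latency_saved) per candidate video.
print(refill_cache(100, [(50, 1000), (30, 700), (40, 800), (20, 300)]))

Running this repeatedly over each cache, with the values recomputed from the current assignment of all other caches, is one way to implement the "keep adjusting until no more improvements" loop suggested above.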
I've actually solved this problem using basic OOP, stream-based data reading and writing, and basic loops.
My solution is in fact available at: https://github.com/TheBlackPlague/YouTubeCache .
It is coded in PHP simply because I wanted to do this quickly in an interpreted language rather than a compiled one. However, it can easily be ported to another language to speed up execution times.

Game Server suggestions

EDIT
I'm completely rewriting this since I realized it wasn't very clear what I needed.
So, I'm going to implement an online game. The idea is quite simple. All players have to answer the same set of questions that is downloaded from the server. To answer each question the player has a given amount of time that depends on question difficulty. The questions are presented one at a time. After the time to answer the current question elapses the next one is presented to the player. After the last question the client should display the leaderboard with the scores of all currently online players. The leaderboard is (of course) computed on server and clients should download it when the game finishes.
OK, that's the idea. What I need are some suggestions on how to implement the whole client-server communication; I don't need details, just some ideas. Most importantly, I'm not sure how client-server time synchronization could work, since it's essential that all players have the same amount of time to answer each question.
I do have a very simple solution in mind, but I'm not sure about possible pitfalls. When the player first connects (or when a new game starts), the client downloads the whole list of questions for the current game, and some time-sync messages are exchanged to get the current game time. Once the questions and the time offset are known at the client, a local timer is started and the game runs completely offline. When the game finishes, each client sends its own score/result to the server, and when the leaderboard is ready the server sends it back to all clients. Once again, a local timer could be used to know when a new game is starting and when to download the new list of questions.
Please post your suggestions and comment on my solution. Thanks
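As a starting point for the time-sync messages mentioned above, here is a minimal NTP-style offset estimate over a single request/response. The send_request callable and the idea that the server replies with its own timestamp are assumptions; any request/response channel (HTTP, WebSocket, raw socket) would work.

import time

def estimate_clock_offset(send_request):
    """Estimate (offset, round_trip) between the client and server clocks.
    send_request is a hypothetical callable that sends a sync request and
    returns the server's timestamp taken when it handled the request."""
    t0 = time.time()              # client clock, request sent
    server_time = send_request()  # server clock, request handled
    t1 = time.time()              # client clock, response received
    round_trip = t1 - t0
    # Assume the server stamped the message roughly mid-flight.
    offset = server_time - (t0 + round_trip / 2)
    return offset, round_trip

# The client can then map local time to server/game time with:
#   game_time = time.time() + offset
# Repeating the exchange a few times and keeping the sample with the smallest
# round trip usually gives a more stable offset.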

Random peer selection algorithm in peer-to-peer game?

I'm developing a poker game for iPhone/iPad which uses Apple's matchmaking service. I'll be using a client-server topology where the dealer is the server. With each hand there will be a new server/dealer. However, before the initial dealer/server selection, the game uses peer-to-peer topology, which leaves me with my dilemma.
How do I get all the players/peers to agree on one random peer to be the initial dealer/server, quickly and efficiently?
I'm currently troubleshooting my own method, in which each peer broadcasts a random number. After all numbers have been received, they are sorted and the peer with the lowest number becomes the initial dealer. However, the issues I'm having (duplicate numbers, etc.) have prompted me to look for a better solution.
Any help would be greatly appreciated.
You need to learn about the Paxos algorithm (i.e., election of a leader).
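If you prefer to keep the broadcast-a-random-number approach from the question, a simpler fix for the duplicate problem is to break ties deterministically on a unique peer ID, so every peer computes the same winner from the same set of announcements. A minimal sketch, where the peer IDs and announced numbers are assumptions for illustration:

def pick_initial_dealer(announcements):
    """announcements: dict mapping a unique peer_id to that peer's broadcast number.
    Every peer runs this on the same collected data and agrees on the result,
    because ties on the number are broken by the peer_id."""
    return min(announcements, key=lambda peer_id: (announcements[peer_id], peer_id))

print(pick_initial_dealer({"peerB": 7, "peerA": 7, "peerC": 12}))  # -> "peerA"

This only works if every peer reliably receives the same set of announcements; handling peers that drop out or never answer is exactly where a proper consensus/leader-election protocol such as Paxos becomes relevant.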

Ramp up/down algorithm for user load testing

I'm working on a user load testing application for web servers and I'm trying to implement a feature for automatically ramping up the maximum number of "users" that a server can handle. I want to spawn test users until some threshold for the average response time and/or http request failure ratio is met, and then I want to kill/spawn users until a stable state just below the thresholds is found.
Essentially, I want to find the maximum stable number of concurrent users that still meets the requirements, as fast as possible.
I can of course figure out an algorithm for this myself, but I'm thinking there might be existing ramp-up/ramp-down algorithms that I could use. If anyone has knowledge of this, I would love it if you could point me in the right direction!
Thanks!
This depends a lot on what's going on and whether the system degrades gradually or there is a discrete drop in performance (e.g. "healthy" -> "dead").
In the second case, there's no feedback to indicate whether you're approaching the boundary, so you will need to first find a point that exceeds the threshold and then jump between that and the largest value that doesn't exceed it. You could speed this up with 2 (or more) separate test servers. Splitting the interval in the middle is pretty much the fastest feasible approach, though if you have 10 servers, you could divide it into 10 steps at each iteration.
If you get some feedback, then you're looking for a method that incorporates this. You may find that the Nelder-Mead algorithm is suitable. It's fairly easy to implement, but you'll likely find implementations in any language of interest.
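For the no-feedback case, a concrete way to implement the ramp-up plus jumping described above is an exponential ramp followed by bisection between the last passing and the first failing user counts. A minimal sketch, where meets_threshold is an assumed callback that runs one load test at the given concurrency and checks the response-time and failure-ratio criteria:

def find_max_stable_users(meets_threshold, start=10, max_users=100000):
    """Return the largest user count for which meets_threshold(users) is True,
    assuming the system passes at low load and fails beyond some boundary."""
    low, high = 0, start
    # Phase 1: exponential ramp-up until the threshold is exceeded (or the cap is hit).
    while high <= max_users and meets_threshold(high):
        low, high = high, high * 2
    if high > max_users:
        high = max_users + 1  # never tested above the cap; treat it as failing
    # Phase 2: bisection between the last passing and the first failing counts.
    while high - low > 1:
        mid = (low + high) // 2
        if meets_threshold(mid):
            low = mid
        else:
            high = mid
    return low

Each call to meets_threshold is a full load test, so the cost is dominated by the number of probes; the doubling phase takes O(log n) probes to bracket the boundary and the bisection phase another O(log n) to pin it down.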
