I want to measure the latency and throughput of a private Ethereum blockchain (Hyperledger Besu).
Is there any solution?
Thank you.
I searched Google for a solution, but found only articles describing what latency and throughput mean; what I want is a way to actually capture and measure them.
I invite you to try Hyperledger Caliper. I think it's the best tool for this job.
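Caliper will report these numbers for you, but if you'd rather roll a quick harness first, the underlying computation is simple: timestamp each transaction at submission and at confirmation (e.g. when its receipt appears), then reduce. A minimal sketch in Python, assuming you have already collected the timestamp pairs yourself:

```python
# Sketch: compute latency and throughput from per-transaction timestamps.
# Assumes you recorded (submit_time, confirm_time) pairs in seconds, e.g.
# by timestamping eth_sendRawTransaction and the matching receipt.

def measure(samples):
    """samples: list of (submit_time, confirm_time) tuples, in seconds."""
    latencies = [confirm - submit for submit, confirm in samples]
    avg_latency = sum(latencies) / len(latencies)
    # Throughput = confirmed transactions over the whole measurement window.
    window = max(c for _, c in samples) - min(s for s, _ in samples)
    throughput = len(samples) / window
    return avg_latency, throughput

samples = [(0.0, 2.0), (0.5, 2.5), (1.0, 4.0), (1.5, 4.5)]
avg_latency, tps = measure(samples)
print(avg_latency, tps)  # avg latency 2.5 s, throughput ~0.89 tx/s
```

Caliper does essentially this per round, with rate control and per-worker aggregation on top.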
Related:
The existing questions address real-time needs, and the answers are API-based solutions. These solutions are not well suited to a large batch task.
Convert Latitude/Longitude To Address
This solution suggests caching Google results for a short time:
What is the best geocode to get long and lat for a street address?
Is there a better solution? If there isn't a good open-source dataset available for this, do you know of a good paid service?
OpenStreetMap has the Nominatim database, which might do what you need.
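For a batch job against Nominatim, a minimal Python sketch might look like the following. The endpoint and the `lat`/`lon` response fields come from Nominatim's public search API; the User-Agent string is a placeholder you must replace, and the one-request-per-second pacing reflects the public server's usage policy. For a truly large batch you should self-host Nominatim with an OSM data extract rather than hammer the public server:

```python
import json
import time
import urllib.parse
import urllib.request

NOMINATIM_URL = "https://nominatim.openstreetmap.org/search"

def parse_result(payload):
    """Extract (lat, lon) from a Nominatim JSON response body, or None."""
    results = json.loads(payload)
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lon"])

def geocode(address):
    query = urllib.parse.urlencode({"q": address, "format": "json", "limit": 1})
    req = urllib.request.Request(
        f"{NOMINATIM_URL}?{query}",
        # The public server requires an identifying User-Agent.
        headers={"User-Agent": "my-batch-geocoder/0.1 (you@example.com)"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_result(resp.read())

def geocode_batch(addresses):
    out = {}
    for addr in addresses:
        out[addr] = geocode(addr)
        time.sleep(1)  # respect the public server's rate limit
    return out
```

Cache the results locally so repeated runs don't re-query addresses you already resolved.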
I have a GTFS feed defined for my fleet. It describes the routes, trips, and timings. Using this GTFS feed, is it possible to optimize the utilization of my fleet's vehicles? Can I schedule the vehicles such that once one completes a trip, it can be assigned to serve a trip on another route?
I have constraints such as: no vehicle should run for more than 12 hours, every vehicle must undergo a 2-hour health check, etc.
To me this sounds like a case of the knapsack problem.
If such a project exists, kindly let me know. Is there an algorithm that can solve this problem?
Thanks,
Yash
You're asking a question that is typically assigned to a scheduling system, one which would produce GTFS files from the get-go. In smaller systems, this actually is not difficult to do, but as the number of routes (or "trip patterns") increases, the process gets more complex.
Before you undertake any project like this, I suggest reading over the TCRP manual on scheduling, paying close attention to the terms "cycle time," "headway," and "interlining."
While I'd love to help more, I don't have time right now to get into the specifics. I performed a similar analysis with automatically collected cycle times on a limited set of routes in my master's thesis, starting on page 118.
I hope this helps. If you have any follow-up questions, post a comment and I'll respond when I have time.
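As a toy illustration of the interlining idea (and of why it gets hard), here is a hypothetical greedy sketch in Python: trips are sorted by start time, and each trip is handed to the first vehicle that is free and would stay under the 12-hour duty cap; otherwise a new vehicle is added. It deliberately ignores deadheading between terminals and the 2-hour health-check window, which is exactly where the real scheduling complexity lives:

```python
# Toy greedy "interlining" sketch: reuse a vehicle on another trip once it
# finishes, subject to a 12-hour duty cap. Trip times are hours since the
# start of the service day. Real schedulers must also model deadheading,
# layovers, and maintenance windows.

def assign_vehicles(trips, max_duty=12.0):
    """trips: list of (start, end) pairs. Returns one trip list per vehicle."""
    vehicles = []  # each: {"trips": [...], "free_at": t, "duty_start": t}
    for start, end in sorted(trips):
        for v in vehicles:
            if v["free_at"] <= start and end - v["duty_start"] <= max_duty:
                v["trips"].append((start, end))
                v["free_at"] = end
                break
        else:  # no existing vehicle fits: put a new one in service
            vehicles.append(
                {"trips": [(start, end)], "free_at": end, "duty_start": start}
            )
    return [v["trips"] for v in vehicles]

trips = [(6, 8), (8, 10), (6.5, 9), (10, 12), (9, 11)]
print(assign_vehicles(trips))  # 5 trips covered by 2 interlined vehicles
```

Greedy assignment is not optimal in general; the full problem is the Vehicle Scheduling Problem, usually attacked with network-flow or integer-programming formulations.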
I'm currently working on a voice-analysis project in which I have to record the voice signals whose frequency is above 17,000 Hz; everything below that rate should be discarded. I hope there is a filter that can record at this rate. If anybody has any ideas about this, please assist me; that would be very helpful. Thank you.
Sounds to me like you would like to implement a high-pass filter.
The Wikipedia page on these is pretty thorough.
Where you got a voice signal with frequency components above 17 kHz is beyond me, since nearly all the energy of the human voice lies well below 10 kHz.
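For the digital case, here is a minimal first-order high-pass sketch in Python, with coefficients from the standard RC discretization. This is an illustration only: at a 17 kHz cutoff a real application would want a much steeper (higher-order) filter and a sample rate comfortably above 34 kHz:

```python
import math

# Minimal first-order digital high-pass filter, 17 kHz cutoff at 48 kHz
# sample rate. y[n] = alpha * (y[n-1] + x[n] - x[n-1]), the discrete
# equivalent of an RC high-pass.

def high_pass(samples, cutoff_hz=17000.0, sample_rate=48000.0):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

Feeding it a constant (DC) signal yields an output that decays to zero, while components near the cutoff and above pass through with much less attenuation than low-frequency ones.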
Try the SCListener sample code available here. It predicts the frequency of the sound produced and helps you proceed further.
Networked applications often benefit from the ability to estimate the bandwidth between two end-points on the Internet. This may be advantageous not only for rate control purposes, but also in isolating preferred connections where a number of alternatives exist.
Although there are a couple of rigorous treatments of packet-pair probing, a summary of the high-level principles and salient points, covering both the how and the why of the method would be very beneficial; even if only to serve as a bootstrap to more in-depth study.
Any pointers to implementations or usage of packet-pair probing that serve as good examples would also be much appreciated.
Update:
I found some good introductory material in a USENIX paper derived from work on the nettimer tool; in particular, the discussion of cross-talk filters and sampling windows for increased agility makes a lot of sense.
About the high-level principles: traditional means of estimating bandwidth send one packet to the target and wait for it to return, then send another packet and wait again, and so on, sequentially. One then computes some kind of average/median of the round-trip time per kilobyte (or any other unit). This information is then compared against the theoretical maximum bandwidth (when available) to estimate the available unused bandwidth.
Packet-pair probing instead sends a group of packets to the target at once (i.e., in parallel) and waits for them to return. A similar average/median is then computed and evaluated against the maximum theoretical bandwidth.
If you send more packets at once, you are disturbing the system you are trying to measure and have to account for this in your estimates, but it is faster than the one-by-one method and feels more like a snapshot. The bottom-line question is: what is the trade-off between measurement accuracy and measurement speed in the two cases, and is that trade worthwhile?
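The core packet-pair computation itself is tiny: two equal-sized packets sent back-to-back get spread apart by the bottleneck link, so the bottleneck capacity is roughly the packet size divided by the inter-arrival gap, with a median (or similar filter) over many pairs to suppress cross-traffic noise. A hypothetical sketch:

```python
# Packet-pair estimate: bottleneck_bandwidth ≈ packet_size / dispersion,
# where dispersion is the inter-arrival gap between two back-to-back
# packets. The median over many pairs filters out samples inflated by
# cross-traffic queuing.

def estimate_bandwidth(arrival_gaps, packet_size=1500):
    """arrival_gaps: inter-arrival times (seconds) for many packet pairs.
    Returns the estimated bottleneck bandwidth in bits per second."""
    gaps = sorted(arrival_gaps)
    median_gap = gaps[len(gaps) // 2]
    return packet_size * 8 / median_gap

# 1500-byte packets arriving ~1.2 ms apart -> ~10 Mbit/s bottleneck;
# the 5 ms outlier (cross-traffic) is discarded by the median.
gaps = [0.0012, 0.0011, 0.0013, 0.0050, 0.0012]
print(estimate_bandwidth(gaps))
```

Tools like nettimer layer the cross-talk filtering and sampling windows mentioned above on top of exactly this computation.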
I have written a program for bandwidth estimation using the packet-pair method. If anyone wants to have a look at it, I will be happy to share it.
EDIT:
Here is how I implemented it in a class assignment:
https://github.com/npbendre/Bandwidth-Estimation-using-Packet-Pair-Probing-Algorithm
Hope this helps!
Of course, the best metric would be the happiness of your users.
But what metrics do you know for measuring GUI usability?
For example, one common metric is the average number of clicks needed to perform an action.
What other metrics do you know?
Jakob Nielsen has several articles regarding usability metrics, including one that is entitled, well, Usability Metrics:
The most basic measures are based on the definition of usability as a quality metric:
success rate (whether users can perform the task at all),
the time a task requires,
the error rate, and
users' subjective satisfaction.
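Given task-observation logs, the first three of these measures reduce to a few lines; subjective satisfaction usually comes from a post-task questionnaire instead. A hypothetical example in Python (the log fields are made up for illustration):

```python
# Hypothetical usability-test log: one record per observed task attempt.
tasks = [
    {"completed": True,  "seconds": 42, "errors": 0},
    {"completed": True,  "seconds": 65, "errors": 2},
    {"completed": False, "seconds": 90, "errors": 3},
    {"completed": True,  "seconds": 38, "errors": 1},
]

success_rate = sum(t["completed"] for t in tasks) / len(tasks)  # task completion
avg_time = sum(t["seconds"] for t in tasks) / len(tasks)        # time on task
error_rate = sum(t["errors"] for t in tasks) / len(tasks)       # errors per task

print(success_rate, avg_time, error_rate)  # 0.75 58.75 1.5
```

The value comes from tracking these numbers across releases rather than from any single measurement.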
I just look at where I want users to go and where they actually go on screen; I do this with data from Google Analytics.
Not strictly usability, but we sometimes measure the ratio of GUI to backend code. This is for the managers, to remind them that while functionality is important, the GUI should get a proportional budget for user testing and study too.
Check:
http://www.iqcontent.com/blog/2007/05/a-really-simple-metric-for-measuring-user-interfaces/
Here is a simple pre-launch check you should do on all your web applications. It only takes about 5 seconds and one screenshot.
Q: “What percentage of your interface contains stuff that your customers want to see?”
a) 10%
b) 25%
c) 100%
If you answer a or b, you might do well, but you'll probably get blown out of the water once someone decides to enter the market with option c.