Hi, I'm creating an XBee network with 1 coordinator and 20 end nodes, each transmitting data 8 times per second. (Currently I've only made one of them talk to the coordinator.)
I would like to know how many data packets the coordinator will be able to receive, as I'll be transmitting at a high data rate (20 end nodes x 8 times per second is 160 packets per second).
Is that feasible? Will I face any problems? What should I be worried about? For that data rate, is there another protocol I could use?
Thanks
I would say that it isn't feasible. The radio data rate of 802.15.4 networks is 250 kbps. Once you're sending that many packets per second, you'll get collisions and retransmissions. It might work if the packets are very small, but you'll still have a lot of overhead from packet headers. If this is a mesh network, you'll also be using bandwidth for packet retransmission whenever a node can't communicate directly with the coordinator.
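For a rough back-of-the-envelope check of that 250 kbps budget (the payload and framing sizes below are assumptions for illustration; real overhead depends on addressing, security, and the Zigbee NWK/APS layers):

```python
# Rough channel-load estimate for 160 packets/second on an 802.15.4 radio.
# Payload size and per-packet overhead are assumptions for illustration.
RADIO_BPS = 250_000          # 802.15.4 raw radio rate (2.4 GHz band)
PACKETS_PER_SEC = 20 * 8     # 20 end nodes * 8 samples/second
PAYLOAD_BYTES = 20           # assumed application payload per packet
OVERHEAD_BYTES = 30          # assumed PHY/MAC/NWK/APS framing + ACK, rough guess

bits_per_packet = (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8
airtime_bps = bits_per_packet * PACKETS_PER_SEC
utilization = airtime_bps / RADIO_BPS

print(f"{airtime_bps} bps of airtime -> {utilization:.0%} of the raw 250 kbps")
# Even at ~26% raw utilization, CSMA/CA backoff, ACKs, retries and mesh
# rehops eat into the usable capacity long before you reach 100%.
```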
Is there a reason you need that data frequency? Could the end devices aggregate their data and send a single packet once per second containing 8 data samples? Zigbee and 802.15.4 were designed for low data rates and low power.
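As a minimal sketch of that aggregation idea (the frame layout of one node-ID byte plus eight 16-bit samples, and the helper functions, are made up for illustration, not an XBee requirement):

```python
import struct, random, time

NODE_ID = 0x01

def read_sensor():
    # placeholder for the real ADC/sensor read
    return random.randint(0, 1023)

def xbee_send(frame: bytes):
    # placeholder for the real XBee transmit call
    print(f"sending {len(frame)} bytes: {frame.hex()}")

def build_frame(samples):
    # one byte of node id + eight 16-bit samples, little-endian
    return struct.pack("<B8H", NODE_ID, *samples)

samples = []
for _ in range(16):                   # two seconds of the 8 Hz sampling loop
    samples.append(read_sensor())
    if len(samples) == 8:
        xbee_send(build_frame(samples))   # one packet per second instead of eight
        samples.clear()
    time.sleep(1 / 8)
```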
If you're going to try this out, you'll want to configure the coordinator's serial interface for at least 230 kbps so it can keep up with the data flow. Do a proof of concept with a single end device (configured as a router, since you don't need the "sleeping" capability of end devices), and then consider testing with 5 devices each sending 4 times as much data (32 packets/second each) to see whether the coordinator can even keep up with that flow.
I am working on a Hyperledger application that stores sensor data from IoT devices.
I'm using HLF v1.4 with Raft. Each IoT device provides JSON data at fixed intervals, which gets stored in Hyperledger. I have worked with HLF v1.3, which doesn't scale very well.
With v1.4, I am planning to start with a 2-organization setup with 5 peers per organization.
But the limiting factor seems to be that, as the number of blocks grows with new transactions, querying the network takes longer and longer.
What steps can be taken to scale HLF with v1.4 or onwards?
What type of server specs (RAM, CPUs) should be used for good performance when selecting a server, e.g. on EC2?
You can change your block size: if you increase the size of each block, the total number of blocks is reduced. For better query and invoke performance, limit how much data you store on the blockchain. Computation speed also matters in blockchain; with faster hardware the TPS can improve. Try instance types like t3.medium or larger, e.g. t3.large.
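For reference, the block size lives in the orderer section of configtx.yaml; the values below are only illustrative starting points to experiment with, not recommendations:

```yaml
# configtx.yaml (orderer section) - larger batches mean fewer, bigger blocks.
# The numbers below are illustrative, not recommended values.
Orderer:
  BatchTimeout: 2s           # cut a block after this long even if it isn't full
  BatchSize:
    MaxMessageCount: 100     # max transactions per block
    AbsoluteMaxBytes: 10 MB  # hard cap on block size
    PreferredMaxBytes: 2 MB  # target block size
```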
I just created a table in HBase and filled it with data. From the 7 region servers, it appears the data was written to region servers 6 and 7.
But I don't understand why the requests per second shown is zero for servers 6 and 7.
Read request count and write request count are the total number of read and write requests seen by a particular region server since its restart. These numbers are kept in memory only, for performance reasons, and are exposed via JMX and the region server load API that the HBase UI uses. You could fetch them yourself using the API (or JMX) and export them to a DB for persistence.
Requests per second is the rate of total requests (read + write) that the region server in question is seeing right now. The rate is calculated from the delta in the number of requests seen by that region server over a period, divided by the length of that period. This detail (and the period itself) can differ between HBase versions. In HBase 2.x it is controlled by hbase.regionserver.metrics.period; in previous versions there was no such setting and the period was fixed (if I remember correctly).
To answer your question, comparing the rate of total requests with the count of total requests is not apples-to-apples. The rate only reflects current traffic, while the count reflects the lifetime number of requests since the region server's restart. If you really think about it, a lifetime rate of requests doesn't make sense, because any real use case is concerned only with the current rate.
If you bulk-filled the tables via put(List<Put>), there would only have been a very small number of requests, as records are sent in batches.
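As a sketch of the "fetch them yourself and persist them" idea: region servers expose their metrics as JSON on the info port (16030 by default) under /jmx. The host name, bean query, and attribute names below (readRequestCount/writeRequestCount) are assumptions that may need adjusting for your HBase version:

```python
import json
import time
from urllib.request import urlopen

# Poll a region server's /jmx endpoint and derive a requests-per-second rate
# from the lifetime counters. Host, port and metric names are assumptions.
JMX_URL = "http://regionserver-6:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server"
PERIOD_S = 30

def total_requests():
    with urlopen(JMX_URL) as resp:
        bean = json.load(resp)["beans"][0]
    return bean["readRequestCount"] + bean["writeRequestCount"]

prev = total_requests()
while True:
    time.sleep(PERIOD_S)
    cur = total_requests()
    # this delta/period is essentially what the UI's "requests per second" shows
    print(f"requests/sec over last {PERIOD_S}s: {(cur - prev) / PERIOD_S:.1f}")
    prev = cur
```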
I'm running a 4-core Amazon EC2 instance (m3.xlarge) with 200,000 concurrent connections and no resource problems (each core at 10-20%, memory at 2/14 GB). However, if I emit a message to all connected users, the first user on a CPU core gets it within milliseconds, but the last connected user gets it with a delay of 1-3 seconds, and each CPU core goes up to 100% for 1-2 seconds. I noticed this problem even at "only" 50k concurrent users (12.5k per core).
How can I reduce the delay?
I tried changing the redis-adapter to the mongo-adapter, with no difference.
I'm using this code to get sticky sessions across multiple CPU cores:
https://github.com/elad/node-cluster-socket.io
The test was very simple: the clients just connect and do nothing more. The server only listens for a message and emits to all.
EDIT: I tested a single core without any cluster/adapter logic with 50k clients, with the same result.
I published the server, single-core server, benchmark, and HTML client in one package: https://github.com/MickL/socket-io-benchmark-kit
OK, let's break this down a bit. 200,000 users on four cores. If perfectly distributed, that's 50,000 users per core. So, if sending a message to a given user takes 0.1 ms of CPU time, it would take 50,000 * 0.1 ms = 5 seconds to send them all.
If you see CPU utilization go to 100% during this, then the bottleneck is probably the CPU, and you may need more cores on the problem. But there may be other bottlenecks too, such as network bandwidth, network adapters, or the redis process. So, one thing to determine immediately is whether your end-to-end time scales with the number of clusters/CPUs you have: if you drop to 2 cores, does the end-to-end time double? If you go to 8, does it drop in half? If yes to both, that's good news, because it means you're probably only running into a CPU bottleneck at the moment, not other bottlenecks. If that's the case, then you need to figure out how to make 200,000 emits across multiple clusters more efficient, by examining the node-cluster-socket.io code and finding ways to optimize your specific situation.
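To make that proportionality check concrete, here is the same arithmetic as a tiny model (the 0.1 ms per-emit cost is the assumed figure from above, not a measurement):

```python
# Toy model of broadcast time when the emit loop is purely CPU-bound.
TOTAL_USERS = 200_000
EMIT_COST_S = 0.0001      # assumed CPU time per emit (0.1 ms)

for cores in (2, 4, 8):
    users_per_core = TOTAL_USERS / cores
    broadcast_time = users_per_core * EMIT_COST_S
    print(f"{cores} cores -> ~{broadcast_time:.2f} s for the last user to get the message")
# If measured times roughly follow this halving pattern, the bottleneck is CPU;
# if they flatten out, look at the network, the redis adapter, or lock contention.
```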
The most optimal version of the code would have every CPU do all its housekeeping to gather exactly what it needs to send to its 50,000 users, and then each CPU runs a tight loop sending 50,000 network packets one right after the other. I can't really tell from the redis adapter code whether that is what happens.
A much worse case would be one process getting all 200,000 socket IDs and then looping over each socket ID, where in that loop it has to look up on redis which server holds that connection and then send a message to that server telling it to send to that socket. That would be far less efficient than instructing each server to just send a message to all its own connected users.
It would be worth trying to figure out (by studying the code) where in this spectrum the socket.io + redis combination falls.
Oh, and if you're using an SSL connection for each socket, you are also devoting some CPU to crypto on every send operation. There are ways to offload the SSL processing from your regular CPU (using additional hardware).
I'm developing a simulator of mobile nodes, each with a transmission range of 100 m, for example. The communication between the nodes is wireless and TDMA-based.
I have noticed that if 2 nodes (not within range of each other) broadcast a message at the same time, it causes a problem.
How can I limit the distance a message sent from a node travels, so that I can broadcast 2 or more messages at the same time and only the nodes within range of the sending node will hear the message?
The code that processes the reception of the packet should calculate the distance from the sender and drop the packet if it's out of range.
A slightly less accurate solution: before sending the packet, the broadcasting node should check the distance to each potential receiving node and not send the packet if it is out of range. This is a bit faster (it generates fewer packets) and clearer (you will see the broadcast animation only for packets that are actually delivered).
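A minimal, framework-agnostic sketch of the receiver-side check (the coordinates, the 100 m range, and the function names are illustrative; in OMNeT++ you would take the positions from the mobility model inside your radio/MAC module). The sender-side variant simply runs the same test before transmitting:

```python
import math
from dataclasses import dataclass

TX_RANGE_M = 100.0   # assumed transmission range

@dataclass
class Node:
    x: float
    y: float

def in_range(sender: Node, receiver: Node, tx_range: float = TX_RANGE_M) -> bool:
    # Euclidean distance between sender and receiver vs. the transmission range
    return math.hypot(sender.x - receiver.x, sender.y - receiver.y) <= tx_range

def on_packet_received(sender: Node, receiver: Node, packet):
    # receiver-side filtering: silently drop anything from a node that is too
    # far away, so simultaneous broadcasts in disjoint areas don't interfere
    if not in_range(sender, receiver):
        return None          # drop the packet
    return packet            # hand it to the MAC/application layer
```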
A much easier solution: use the INET Framework, which already has the necessary implementation. You would only need to implement a MAC module that handles the TDMA protocol.
I am trying to reduce packet manipulation to a minimum in order to improve the efficiency of a specific program I am working on, but I am struggling with the time it takes to send data through a UDP socket using sendto/recvfrom. I am using 2 very basic processes (applications), one sending, the other receiving.
I would like to understand how Linux works internally when these function calls are used...
Here are my observations:
When sending packets at:
- 10 kbps, the time it takes for a message to go from one application to the other is about 28 µs
- 400 kbps, about 25 µs
- 4 Mbps, about 20 µs
- 40 Mbps, about 18 µs
When using different CPUs, the times are obviously different but consistent with those observations. There must be some sort of setting that lets the socket queues be read faster depending on the traffic flow on a socket... how can that be controlled?
When using a node as a forwarding node only, going in and out takes about 8 µs with a 400 kbps flow, and I want to get as close to that value as I can. 25 µs is not acceptable and is deemed too slow (it is obviously far less than the delay between packets anyway... but the point is to eventually be able to process a much larger number of packets, hence this time needs to be shortened!). Is there anything faster than sendto/recvfrom? I must use 2 different applications (processes); I know I cannot use a monolithic block, so the information has to be sent over a socket.
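For reference, the kind of one-way measurement described above can be sketched as two tiny processes carrying a send timestamp in the payload (the port, payload size, and interval are arbitrary; a benchmark at these microsecond scales would normally be written in C with pinned cores, so treat this only as the structure of the experiment):

```python
# receiver.py - prints one-way app-to-app latency; the sender's monotonic
# timestamp is comparable here only because both processes run on one host.
import socket, struct, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))
while True:
    data, _ = sock.recvfrom(2048)
    (t_sent,) = struct.unpack("<q", data[:8])
    print(f"one-way latency: {(time.monotonic_ns() - t_sent) / 1000:.1f} us")
```

```python
# sender.py - sends one packet per interval; vary INTERVAL_S and PAYLOAD to
# reproduce the 10 kbps ... 40 Mbps sweep described above.
import socket, struct, time

INTERVAL_S = 0.01            # assumed inter-packet gap
PAYLOAD = b"x" * 100         # assumed packet size

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    msg = struct.pack("<q", time.monotonic_ns()) + PAYLOAD
    sock.sendto(msg, ("127.0.0.1", 9999))
    time.sleep(INTERVAL_S)
```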