I own multiple access points that broadcast the same SSID. When a Wi-Fi enabled device such as a phone connects to one of the access points, how can I determine the location of that device?
Is it possible, on the AP side, to find out the signal strength that the device gets, and then calculate the distance based on that information? The devices that connect to these access points don't run any software component that I own (so I can't query the location from the device itself).
This is not as easy as it sounds. You can't use signal strength as a stand-in for distance: if there is an obstacle (a wall, people...) between the device and the AP, the signal strength will not be proportional to the distance.
              O
              B
+---+         S         +---+      +---+
|AP1|         T         |STA|      |AP2|
+---+         A         +---+      +---+
              C
              L
              E
In this case, you can have a better signal with AP2 than with AP1.
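To see why this matters, here is a minimal, purely illustrative sketch in Python (a made-up log-distance path-loss model with assumed calibration values, not a real measurement): an obstacle that absorbs only 7 dB makes the device look more than twice as far away as it really is.
import math

def estimated_distance_m(rssi_dbm, rssi_at_1m_dbm=-30.0, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model to estimate distance in metres."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Free-space RSSI expected at 10 m with the assumed parameters (-50 dBm)
rssi_free_space = -30.0 - 10 * 2.0 * math.log10(10)
wall_loss_db = 7.0   # assumed extra attenuation from one wall

print(estimated_distance_m(rssi_free_space))                 # ~10 m, correct
print(estimated_distance_m(rssi_free_space - wall_loss_db))  # ~22 m, badly wrong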
Nevertheless, you can easily determine whether the STA is connected to AP1 or AP2, because each 802.11 frame carries four address fields:
Source MAC address
Receiver MAC address
Transmitter MAC address
Destination MAC address
+----+                                   +----+
|STA1| Source                Destination |STA2|
+-+--+                                   +----+
  |
  |       Transmitter     Receiver          ^
  |       +---+           +---+             |
  +-----> |AP1|           |AP2+-------------+
          +-+-+           +---+
            |               ^
            |               |
            |               |
            +---------------+
              via ethernet
So if you are the source, you can send a frame (a ping, for example) and check the transmitter address of the frames coming back. This way you get the MAC address of the AP that the device is connected to.
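As a rough sketch of how you could watch this from a monitoring point near the APs (assuming a capture interface already in monitor mode, here called mon0, and a hypothetical client MAC), Scapy can print the transmitter address of the data frames delivered to the station:
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

STATION_MAC = "aa:bb:cc:dd:ee:ff"   # hypothetical MAC of the phone

def handle(pkt):
    # type 2 = 802.11 data frames; addr1 = receiver, addr2 = transmitter
    if pkt.haslayer(Dot11) and pkt.type == 2 and pkt.addr1 == STATION_MAC:
        print("Frame delivered to the STA by AP:", pkt.addr2)

sniff(iface="mon0", prn=handle, store=False)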
Maybe you can use ping to check the delay between the device and the APs. A bigger delay means a longer distance, and from that you could estimate the location. That's just a suggestion.
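If you want to experiment with that suggestion, here is a minimal sketch (assuming the AP runs Python and a standard ping binary, and that the phone answers ICMP at the hypothetical address below). Keep in mind that millisecond-scale RTTs are dominated by processing and queuing delay rather than propagation time, so the result is at best a very coarse indicator.
import re
import statistics
import subprocess

def median_rtt_ms(host, count=20):
    """Ping `host` and return the median round-trip time in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    return statistics.median(float(m) for m in re.findall(r"time=([\d.]+)", out))

print(median_rtt_ms("192.168.1.42"))   # hypothetical IP of the phone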
The only advantage I can think of for using 16-bit instead of 64-bit addressing on an IEEE 802.15.4 network is that 6 bytes are saved in each frame. There might also be a small win for memory-constrained devices (microcontrollers), especially if they need to keep a list of many addresses.
But there are a couple of drawbacks:
A coordinator must be present to deal out short addresses
Big risk of conflicting addresses
A device might be assigned a new address without other nodes knowing
Are there any other advantages of short addressing that I'm missing?
You are correct in your reasoning: it saves 6 bytes, which is a non-trivial amount given the packet size limit. This is also done with PanId vs ExtendedPanId addressing.
You are inaccurate about some of your other points though:
The coordinator does not assign short addresses. A device randomly picks one when it joins the network.
Yes, there is a 1/65000 or so chance for a collision. When this happens, BOTH devices pick a new short address and notify the network that there was an address conflict. (In practice I've seen this happen all of twice in 6 years)
This is why the binding mechanism exists. You create a binding using the 64-bit address. When transmission fails to a short address, the 64-bit address can be used to relocate the target node and correct the routing.
The short (16-bit) and simple (8-bit) addressing modes and the PAN ID Compression option allow a considerable saving of bytes in any 802.15.4 frame. You are correct that these savings are a small win for the memory-constrained devices that 802.15.4 is designed to run on; however, the main goal of these savings is to reduce radio usage.
The original design goals for 802.15.4 were along the lines of 10-metre links, 250 kbit/s, low-cost, battery-operated devices.
The maximum frame length in 802.15.4 is 128 bytes. The "full" addressing modes in 802.15.4 consist of a 16-bit PAN ID and a 64-bit Extended Address for both the transmitter and the receiver. This amounts to 20 bytes, or about 15% of the available bytes in the frame. If these long addresses had to be used all the time, there would be a significant impact on the amount of application data that could be sent in any frame AND on the energy used to operate the radio transceivers in both Tx and Rx.
The 802.15.4 MAC layer defines an association process that can be used to negotiate and use shorter addressing mechanisms. The addressing that is typically used is a single 16-bit PAN ID and two 16-bit Short IDs, which amounts to 6 bytes, or about 5% of the available bytes.
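As a rough back-of-the-envelope sketch of those percentages (ignoring the rest of the MAC header: frame control, sequence number, FCS):
MAX_FRAME = 128                 # maximum frame length quoted above, in bytes

full_addressing  = 2 * (2 + 8)  # PAN ID + 64-bit extended address for both ends = 20 bytes
short_addressing = 2 + 2 + 2    # one compressed PAN ID + two 16-bit short IDs   =  6 bytes

for name, size in (("full", full_addressing), ("short", short_addressing)):
    print(f"{name}: {size} bytes = {size / MAX_FRAME:.1%} of the frame")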
On your list of drawbacks:
Yes, a coordinator must hand out short addresses. How the addresses are created and allocated is not specified but the MAC layer does have mechanisms for notifying the layers above it that there are conflicts.
The risk of conflicts is not large, as there are 65533 possible addresses to be handed out and 802.15.4 is only concerned with "Layer 2" links (NB: 0xFFFF and 0xFFFE are special values). These addresses are not routable/routing/internetworking addresses (well, not from 802.15.4's perspective).
Yes, I guess a device might get a new address without the other nodes knowing, but I have a hunch this question has more to do with ZigBee's addressing than with the 802.15.4 MAC addressing. Unfortunately I do not know much about ZigBee's addressing, so I can't comment too much here.
I think it is important to point out that 802.15.4 is a layer 1 and layer 2 specification, and ZigBee is layer 3 and up, i.e. ZigBee sits on top of 802.15.4.
This table is not 100% accurate, but I find it useful to think of 802.15.4 in this context:
+---------------+------------------+------------+
| Application   | HTTP / FTP / Etc | CoAP / Etc |
+---------------+------------------+------------+
| Transport     | TCP / UDP        |            |
+---------------+------------------+   ZigBee   |
| Network       | IP               |            |
+---------------+------------------+------------+
| Link / MAC    | WiFi / Ethernet  | 802.15.4   |
|               | Radio            | Radio      |
+---------------+------------------+------------+
PRE-SCRIPTUM:
I have searched StackOverflow and there is no Q&A explaining all the possibilities for tweaking WebRTC to make it more viable for end products.
PROBLEM:
WebRTC has a very nice UX and it is cutting edge. It should be perfect for mesh calls (3-8 people), but it is not yet. The biggest issue with mesh calls (where all participants exchange streams with each other) is resource consumption, especially CPU.
Here are some stats I would like to share:
2.3 GHz Intel Core i5 (2 cores), OSX 10.10.2 (14C109), 4GB RAM, Chrome 40.0.2214.111 (64-bit)
+------------------------------------+----------+----------+
| Condition                          | CPU      | Delta    |
+------------------------------------+----------+----------+
| Chrome (idle after getUserMedia)   | 11%      | 11%      |
| Chrome-Chrome                      | 55%      | 44%      |
| Chrome-Chrome-Chrome               | 74%      | 19%      |
| Chrome-Chrome-Chrome-Chrome        | 102%     | 28%      |
+------------------------------------+----------+----------+
QUESTION:
I would like to create a table of WebRTC tweaks which can reduce resource consumption and make the overall experience better. Are there any other settings I can play with, apart from those in the table below?
+------------------------------------+--------------+----------------------+
| Tweak                              | CPU Effect   | Affects              |
+------------------------------------+--------------+----------------------+
| Lower FPS                          | Low to high  | Video quality lower  |
| Lower video bitrate                | Low to high  | Video quality lower  |
| Turn off echo cancellation         | Low          | Audio quality lower  |
| Lower source video resolution      | Low to high  | Video quality lower  |
| Get audio only source              | Very high    | No video             |
| Codecs? Compression? More?..       |              |                      |
+------------------------------------+--------------+----------------------+
P.S.
I would like to keep the same architecture (mesh), so an MCU is not what I am searching for.
You can change the audio rate and codec (Opus -> PCMA/PCMU), and you could also reduce the number of channels. Changing audio will help, but video is your main CPU hog.
Firefox does support H.264. Using it could bring significant reductions in CPU utilization, since a ton of different architectures support hardware encoding/decoding of H.264. I am not 100% sure whether Firefox will take advantage of that, but it is worth a shot.
As for Chrome, VP8 is really your only option for video at the moment, and your codec-agnostic changes (resolution, bitrate, etc.) are really the only way to address the cycles used there.
You may also be able to force Chrome to use a lower-quality stream by negotiating the maximum bandwidth in your SDP. Though, in the past, this has not worked with Firefox.
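As an illustration of that last point, here is a minimal sketch of the SDP munging (plain string handling, shown in Python; in a real app you would do the same edit in JavaScript on the offer/answer before calling setLocalDescription). It inserts a b=AS: cap into the video m-section; the 256 kbit/s value is just an assumed example.
def cap_video_bandwidth(sdp: str, kbps: int = 256) -> str:
    """Insert a 'b=AS:<kbps>' line into the video section of an SDP blob."""
    out, in_video = [], False
    for line in sdp.splitlines():
        if line.startswith("m="):
            in_video = line.startswith("m=video")
        out.append(line)
        # the bandwidth line conventionally follows the c= line of its section
        if in_video and line.startswith("c="):
            out.append(f"b=AS:{kbps}")
            in_video = False
    return "\r\n".join(out) + "\r\n"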
I have a doubt about the concept of endianness. Please don't refer me to Wikipedia, I've already read it.
Isn't endianness just the two ways that the hardware wiring (between memory and registers, through the data bus) can be implemented in a system?
In my understanding, the picture below is a little-endian implementation (follow the horizontal line from a memory address (e.g. 4000) and then the vertical line to reach the low/high part of the register).
As you can see, the lower memory addresses are physically connected to the low part of the 4-byte register.
I think it is not related at all to read and write instructions in any language (e.g. LDR in ARM).
1-byte memory cells:
- address 4000, value XX ---------------------+
- address 4001, value XX ------------------+  |
- address 4002, value XX ---------------+  |  |
- address 4003, value XX ------------+  |  |  |
                                     |  |  |  |
general-purpose register:            XX XX XX XX
Yes and no. (I can't see your diagram, but I think I understand what you're asking.) The way data lines are physically connected in the hardware can determine/control whether the representation in memory is treated as big or little endian. However, there is more to it than this: little endian is a means of representation, so for instance data stored on magnetic storage (in a file) might be coded using either little-endian or big-endian representation, and obviously at this level the hardware is not important.
Furthermore, some 8-bit microcontrollers can perform 16-bit operations, which are carried out at the hardware level using two separate memory accesses. They can therefore use either little- or big-endian representation independently of bus design and ALU connection.
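For instance, here is a small sketch (Python's struct module, purely for illustration) showing endianness as a choice of representation that software can make regardless of how the bus is wired:
import struct

value = 0x11223344

little = struct.pack("<I", value)   # b'\x44\x33\x22\x11' - low byte first
big    = struct.pack(">I", value)   # b'\x11\x22\x33\x44' - high byte first

print(little.hex(), big.hex())      # 44332211 11223344

# Either byte string can be written to a file or sent over a wire;
# the representation is chosen in software, independent of the hardware.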
I read the Datasheet for an Intel Xeon Processor and saw the following:
The Integrated Memory Controller (IMC) supports DDR3 protocols with four
independent 64-bit memory channels with 8 bits of ECC for each channel (total of
72-bits) and supports 1 to 3 DIMMs per channel depending on the type of memory
installed.
I need to know what this means exactly from a programmer's point of view.
The documentation on this seems to be rather sparse and I don't have someone from Intel at hand to ask ;)
Can this memory controller execute 4 loads of data simultaneously from non-adjacent memory regions (and fetch each piece of data from up to 3 memory DIMMs)? I.e. 4x64 bits, striped from up to 3 DIMMs, e.g.:
| X | _ | X | _ | X | _ | X |
(X is loaded data, _ an arbitrarily large region of unloaded data)
Can this IMC execute 1 load which will fetch up to 1x256 bits from a contiguous memory region?
| X | X | X | X | _ | _ | _ | _ |
This seems to be implementation-specific, depending on compiler, OS and memory controller. The standard is available at: http://www.jedec.org/standards-documents/docs/jesd-79-3d . It seems that if your controller is fully compliant there are specific bits that can be set to indicate interleaved or non-interleaved mode. See pages 24, 25 and 143 of the DDR3 spec, but even in the spec the details are light.
For the i7/i5/i3 series specifically, and likely all newer Intel chips, the memory is interleaved as in your first example. For these newer chips, and presumably a compiler that supports it, yes, one Asm/C/C++-level call to load something large enough to be interleaved/striped would initiate the required number of independent hardware channel-level loads to each channel of memory.
In the Triple-channel section of the Multi-channel memory page on Wikipedia there is a small list of CPUs that do this; it is likely incomplete: http://en.wikipedia.org/wiki/Multi-channel_memory_architecture
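As a purely illustrative sketch (the real address-to-channel mapping is chipset-specific and not usually documented), a naive cache-line interleave across four channels behaves like this, which is why one large contiguous load can keep all four 64-bit channels busy:
CACHE_LINE = 64    # bytes per line (assumed interleave granularity)
CHANNELS   = 4     # number of independent memory channels

def channel_of(address: int) -> int:
    """Map a physical address to a channel under naive line interleaving."""
    return (address // CACHE_LINE) % CHANNELS

# A 256-byte contiguous region touches every channel exactly once.
for addr in range(0, 256, CACHE_LINE):
    print(hex(addr), "-> channel", channel_of(addr))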
I have two modules using the same clock but in different files. When I sample a signal that comes from module A in module B, the waveform simulation doesn't show it being sampled one clock cycle later as it should; it shows it being sampled on the same rising edge (behaviour that fits an asynchronous instantiation).
I have been told this happens because Active-HDL considers them two different clocks, because of the different components, and that is why it samples on the same rising edge (because of the delta time the signal takes to go from A to B).
How can I make Active-HDL understand that both modules use the same clock?
This has nothing to do with your simulator. I assume that you're doing something like this:
        +----------+           +----------+
        |          |-- clk --->|          |
clk --->| Module A |           | Module B |
        |          |-- data -->|          |
        +----------+           +----------+
whereas you should be doing something like this:
         +----------+           +----------+
         |          |           |          |
clk -+-->| Module A |-- data -->| Module B |
     |   |          |           |          |
     |   +----------+           |          |
     |                          |          |
     +------------------------->|          |
                                +----------+
The problem with the first configuration is that your clock signal gets delayed by one or more delta cycles when it goes through module A. It may thus toggle in the same delta cycle as the data signal, or in a later one. This is something you will not see in the simulator's waveform view (unless it has an option to expand delta cycles), but you can have a look at the list view to see exactly what happens in delta time.
The handling of clock within your chip and within your simulation environment requires the same type of care you take in doing a board design. In particular clock skew must always be smaller than the smallest propagation delay.
In an RTL simulation environment, all of the delays on signals are measured in terms of delta cycles (the default delay for any signal assignment when you are not using after). Going through a port does not incur any delta cycles. However, every assignment to a signal causes a delta cycle delay.
One method to ensure successful data transfer is to make sure all clocks in the design are delta-cycle aligned when they are used. The simplest way to achieve this is to make sure that none of the blocks makes an assignment to the clock it uses. Hence, do not do any of the following:
LocalClk <= PortClk ; -- each assignment causes a delta cycle of clock skew
GatedClk <= Clk and Enable ; -- clock gates are bad. See alternative below
We rarely use clock gates, and then only when it is an approved part of our methodology (usually not for FPGAs). In place of gated clocks in your design, use data-path enables:
process (Clk)
begin
if rising_edge(Clk) then
if Enable = '1' then
Q <= D ;
end if ;
end if ;
end process ;
There are other methodologies to sort this out.