I'm trying to write a program for a class that simulates a router, and so far I have the basics set up (a "router" can send and receive packets through an emulated server to other "routers" connected to the server). Each packet contains only the distance vector for that router. When a router receives a packet, it is supposed to update its own distance vector accordingly using the Bellman-Ford algorithm. The problem I'm having is that I can't figure out how to implement the actual algorithm without cheating and using an adjacency matrix.
For example, say I have 3 routers connected as follows:
A ---1--- B ---2--- C
That is, A and B are connected with a link cost of 1, and B and C are connected with a link cost of 2. So when the routers are all started, they will send a packet to each of their directly connected neighbors containing their distance vector info. So A would send router B (0, 1, INF), B would send A and C (1, 0, 2) and C would send B (INF, 2, 0) where INF means the 2 routers are not directly connected.
So let's look at router A receiving a packet from router B. Using the Bellman-Ford algorithm, A calculates the minimum cost to each other router as follows:
Mincost(a,b) = min(cost(a,b) + distance(b,b), cost(a,c) + distance(c,b))
Mincost(a,c) = min(cost(a,b) + distance(b,c), cost(a,c) + distance(c,c))
So the problem I am running into is that I cannot for the life of me figure out how to implement an algorithm that will calculate the minimum path for a router to every other router. It's easy enough to make one if you know exactly how many routers there are going to be but how would you do it when the number of routers can be arbitrarily big?
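One way around the fixed-size matrix is to key everything by router ID in a map/dict, so the vector simply grows as new destinations are learned. Here is a minimal Python sketch of that idea (function and variable names are my own, not from any particular framework):

```python
import math

def update_vector(my_vector, link_costs, neighbor, neighbor_vector):
    """Apply the Bellman-Ford relaxation when `neighbor` sends its vector.

    my_vector:       {dest: best known cost from me}
    link_costs:      {neighbor: direct link cost from me}
    neighbor_vector: {dest: neighbor's best known cost}
    Returns True if my_vector changed (i.e. I should re-advertise it).
    """
    changed = False
    cost_to_neighbor = link_costs[neighbor]
    for dest, cost in neighbor_vector.items():
        candidate = cost_to_neighbor + cost
        # unknown destinations default to INF, so new routers are picked
        # up automatically without resizing anything
        if candidate < my_vector.get(dest, math.inf):
            my_vector[dest] = candidate
            changed = True
    return changed

# A's view of the A---1---B---2---C example:
a_vector = {'A': 0, 'B': 1}          # A only knows its direct links at first
a_links = {'B': 1}
b_vector = {'A': 1, 'B': 0, 'C': 2}  # the packet B sends to A

update_vector(a_vector, a_links, 'B', b_vector)
print(a_vector)   # {'A': 0, 'B': 1, 'C': 3}
```

Because the dict starts out containing only the directly connected neighbors and grows on demand, the same code works no matter how many routers eventually show up in received packets.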
You can never be sure of the shortest paths with DVMRP.
For one thing, you don't have a global view of the network. Each router operates on only as much as it sees, and what it sees is restricted and can be misleading; look up the looping problem of DVMRP. DVMRP never has the full network information to work with.
It isn't scalable either: its performance degrades as the number of routers increases, because of the distance-vector update messages flooded around and the limited accuracy of those messages at any given moment.
It is one of the earliest multicast protocols, and its performance is comparable to that of RIP on the unicast side.
This is a problem I recently came back to after putting it on the back burner for a while: trying to write a program that calculates the flow rates of some resource through a network of pipes, from resource sources to resource sinks, where each pipe can only pass so much resource per unit of time. This is, of course, the classic "network flow problem", and in particular the typical aim is to find a flow pattern that maximizes the flow from the sources to the sinks. I wrote a program that uses the well-known Ford-Fulkerson max-flow method to do this, but I found that while this algorithm does a fine job of finding a flow solution, it doesn't necessarily produce one that is particularly "natural" in terms of the flow pattern.
That is to say, consider a graph like the one below.
------------- SINK 1
0 / 8 | 0 / 5
SOURCE ---------X
| 0 / 5
------------- SINK 2
where the numbers represent the current flow rate on that particular edge or "pipe", here in "units" per second, versus the maximum flow the pipe can support, the "X" is a junction node, and the other labels should be self-explanatory.
When we solve this using F-F (which requires us to temporarily add an "aggregate sink" node that ties the two sinks on the right together), we find the max flow rate is indeed 8 U/s, which should be obvious just from simple inspection for such a simple graph. However, the flow pattern it gives may look something like either
------------- SINK 1
8 / 8 | 5 / 5
SOURCE ---------X
| 3 / 5
------------- SINK 2
or
------------- SINK 1
8 / 8 | 3 / 5
SOURCE ---------X
| 5 / 5
------------- SINK 2
depending on the order in which it encounters the edges during the depth-first walk used in the calculation. Trouble is, not only is that behavior itself not ideal, the flow doesn't "feel natural" in a certain sense. Intuitively, if we imagine pushing a fluid, we'd expect 4 U/s of flow to go to sink 1 and another 4 to go to sink 2, by symmetry.
Indeed, if we shrink the capacity of the edge leading out of the source to 5, the Ford-Fulkerson algorithm may starve one sink entirely, and that is also a behavior I'd like to avoid: if there isn't enough flow to keep everybody as happy as they'd like to be, then at least try to distribute it as evenly as possible. In this case, that means that if the max flow is, say, 80% of the flow needed to fully satiate all the sinks, then 80% should go to each sink, unless a constriction somewhere in the graph prevents sending even that much to a particular sink, in which case the excess flow should back up and go to the other sinks while that one still gets the maximum it can.
So my question is, what sort of algorithms would have either this behavior or a behavior similar to it? Or, to put it another way, if F-F is a good tool to just find a maximum flow, what is a good tool for tailoring the pattern of that maximum flow to some "desirable" form like this?
One simple solution I thought of is to just repeatedly apply F-F, only instead of routing from the source to the fictitious aggregate sink, apply it from the source to each individual sink, thus giving the max flow that is capable of making it through the constrictions, then work out from that how much each sink can actually get fed based on its demand and the whole-graph max flow. Trouble is, that means running the algorithm as many times as there are sinks, so the Big-O goes up, perhaps too much. Is there a more efficient way to achieve this?
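For what it's worth, the first step of that idea (computing the per-sink max flow) can be sketched with a plain Edmonds-Karp variant of Ford-Fulkerson. Everything below (the names, the toy graph) is illustrative only, not a full solution to the fairness problem:

```python
from collections import deque, defaultdict

def max_flow(edges, s, t):
    """Edmonds-Karp (BFS-based Ford-Fulkerson). edges: list of (u, v, capacity)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)          # reverse edge needed for the residual graph
    flow = defaultdict(int)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # collect the path, find its bottleneck, then push flow along it
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] - flow[e] for e in path)
        for u, v in path:
            flow[(u, v)] += push
            flow[(v, u)] -= push
        total += push

# the example graph: SOURCE --8--> X, X --5--> each sink
edges = [('S', 'X', 8), ('X', 'T1', 5), ('X', 'T2', 5)]
print(max_flow(edges, 'S', 'T1'))   # per-sink cap: 5
print(max_flow(edges, 'S', 'T2'))   # per-sink cap: 5
```

Given the per-sink maxima (5 and 5 here) and the whole-graph max flow (8 via an aggregate sink), the proportional-sharing rule from the question could then be applied on top; as noted, this costs one max-flow run per sink.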
About a week ago I started using Veins (4.4) under OMNeT++ (5.0).
My current task is to let vehicles adjust their transmission range according to a specific context. I have read a lot of related questions, such as these (and in other topics/forums):
Dynamical transmission range in the ieee802.11p module
Vehicles Receive Beacon Messages outside RSU Range
How coverage distance and interference distance are affected by each other
Maximum transmission range vs maximum interference distance
Reduce the coverage area between vehicles
how to set the transmission range of a node under Veins 2.0?
My Question:
How to -really- change the transmission range of just some nodes?
From the links above, I learned that the term "transmission range" is technically related to the received power, noise, sensitivity threshold, etc., which together define the probability of reception.
Since I am new to Veins (and OMNeT++ as well), I ran a few tests and concluded the following:
"TraCIMobility" module can adjust the nodes' parameters (for each vehicle, there is an instance) such as the ID, speed, etc.
I could also instantiate "Mac1609_4" (one per vehicle) and change some of its parameters, such as "txPower", during the simulation run-time, but this had no effect on the real communication range.
I could not instantiate the "connection manager" module (because it is global), which was the only thing responsible for (and which overrides) the effective communication range. This module can be configured in the ".ini" file, but I want different transmission powers per vehicle, and most importantly ones that can be changed at run-time.
The formula to calculate the transmission range is in the attached links; I understand it, but there must be a way to define or change these parameters in one of the layers (even in the PHY layer, i.e., something like the attached signal strength...).
Again, some of what I have said may be wrong; I just want to know what to change, and how, to adjust this transmission range.
You were right to increase the mac1609_4.txPower parameter to have a node send with more power (hence, the signal being decodable further away). Note, however, that (for Veins 4.4) you will also need to increase connectionManager.pMax then, as this value is used to determine the maximum distance (away from a transmitting simulation module) that a receiving simulation module will be informed about an ongoing transmission. Any receiving simulation module further away will not be influenced by the transmission (in the sense of it being a candidate for decoding, but also in the sense of it contributing to interference).
Also note that transmissions on an (otherwise) perfectly idle channel will reach much further than transmissions on a typically-loaded channel. If you want to obtain a good measurement of how far a transmission reaches, have some nodes create interference (by transmitting broadcasts of their own), then look at how the Frame Delivery Rate (FDR) drops as distance between sender and receiver increases.
Finally, note that both 1) the noise floor and 2) the minimum power level necessary for the simulation module of a receiver to attempt decoding a frame need to be calibrated to the WLAN card you want to simulate. The values chosen in the Veins 4.4 tutorial example are very useful for demonstrating the concepts of Veins, whereas the values of more recent versions of Veins come closer to what you would expect from a "typical" WLAN card used in some of the more recent field tests. See the paper Bastian Bloessl and Aisling O'Driscoll, "A Case for Good Defaults: Pitfalls in VANET Physical Layer Simulations," Proceedings of IFIP Wireless Days Conference 2019, Manchester, UK, April 2019 for a more detailed discussion of these parameters.
I am just giving my opinion in case someone ends up in my situation:
In Veins (the old version I am using is 4.4), the "connection manager" is responsible for evaluating a "potential" exchange of packets; thus, its maximum transmission power is almost always set to an upper bound.
I was confused because, after I changed the vehicles' "Mac1609_4" transmission power, the connection manager still showed me graphically that packets were being received by some far-away nodes. In fact they were not: it was merely evaluating whether each packet was properly received or not (via the formula discussed in the links above).
Thus: changing the "txPower" of each vehicle really did have an effect beyond the graphics (the messages were not passed up to the upper layers of out-of-range nodes).
In sum, to build a transmission-range-aware scheme, this is what must be done:
In the sender node (vehicle), and similarly to the "traci" pointer which deals with the mobility features, a pointer to the "Mac1609_4" module must be created and initialized as follows:
In "tracidemo11p.h" add ->
#include "veins/modules/mac/ieee80211p/Mac1609_4.h"//added
#include "veins/base/utils/FindModule.h"//added
and as a protected variable in the class of "tracidemo11p" in the same ".h" file ->
Mac1609_4* mac;//added
In "tracidemo11p.cc" add ->
mac = FindModule<Mac1609_4*>::findSubModule(getParentModule());
Now you can manipulate "mac" just as you would "traci"; the appropriate methods are in "modules/mac/ieee80211p/Mac1609_4.cc" and ".h".
For our purposes, the method is:
mac->setTxPower(10);//for example
This will have an impact on the simulation in real-time for each node instance.
I may have described this in basic terms because I am new to OMNeT++/Veins; all of this was done in less than a week (and is provided for new users as well).
I hope it is helpful (and correct).
I have a set of water meters for water consumers drawn up as GeoJSON and visualized with ol3. For each consumer house I have their water usage for the given year, and the water pipe system is given as LineStrings, with metadata for the diameter of each pipe section.
What is the minimum information I need to be able to visualize/calculate the total amount of water that passed through each pipe over the year, when the pipes have inner loops/circles?
Is there a library that makes it easy to do the calculations in JavaScript?
Naive approach: start from each house, move to the first pipe junction, add the house's metered usage as water flowing out of that junction, and continue until the water plant is reached. This works only if there are no loops within the pipe system.
This sounds more like a physics or civil engineering problem than a programming one.
But as best I can tell, you would need time series data for sources and sinks.
Consider this simple network:
Say, A is a source and B and D are sinks/outlets.
If the flow out of B is given, the flow in |CB| would be dependent on the flow out of D.
So e.g. if B and D were always open at the same time, the total volume that has passed |CB| might be close to 0. Conversely, if B and D were never open at the same time the number might be equal to the volume that flowed through |AB|.
If you can obtain time series data, so you have concurrent values of flow through D and B, I would think there would exist a standard way of determining the flow through |CB|.
Wikipedia's Pipe Network Analysis article mentions one such method: The Hardy Cross method, which:
"assumes that the flow going in and out of the system is known and that the pipe length, diameter, roughness and other key characteristics are also known or can be assumed".
If time series data are not an option, I would pretend it was always average (which might not be so bad given a large network, like in your image) and then do the same thing.
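As a toy illustration of the Hardy Cross idea (not a full network solver), here is the loop-correction iteration for the simplest possible case: two parallel pipes between the same pair of junctions, with head loss modeled as h = r·Q². All names and numbers below are assumptions of mine:

```python
def hardy_cross(r_top, r_bot, total_in, iters=50):
    """Hardy Cross loop correction for one loop: two parallel pipes
    carrying `total_in` between the same two junctions.

    r_top, r_bot: pipe resistance coefficients (head loss h = r * Q**2)
    Returns (q_top, q_bot), the converged flow split.
    """
    # initial guess: any split that satisfies continuity at the junctions
    q_top = total_in / 2.0
    q_bot = total_in - q_top
    for _ in range(iters):
        # signed head-loss sum around the loop (top traversed with the
        # assumed flow direction, bottom against it)
        num = r_top * q_top * abs(q_top) - r_bot * q_bot * abs(q_bot)
        den = 2.0 * (r_top * abs(q_top) + r_bot * abs(q_bot))
        dq = -num / den          # the Hardy Cross correction for this loop
        q_top += dq
        q_bot -= dq              # continuity is preserved automatically
    return q_top, q_bot

# a wide pipe (r=1) in parallel with a narrow one (r=4), 10 units/s total
q_top, q_bot = hardy_cross(r_top=1.0, r_bot=4.0, total_in=10.0)
print(round(q_top, 3), round(q_bot, 3))   # ≈ 6.667 and 3.333
```

A real network would carry one such correction per independent loop per sweep, but the structure of each step is the same as above.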
You can use the Ford-Fulkerson algorithm to find the maximum flow in a network. To use this algorithm, you need to represent your network as a graph with nodes that represent your houses and edges to represent your pipes.
You can first simplify the network by consolidating demands at the "dead-ends". Next you'll need pressure data at the 3 feeds into this network, which I see as the top feed from the 90 (mm?), the centre feed at the 63, and the bottom feed near the 50. These 3 clusters are linked by a 63 mm main running down; with the consolidated demands and the pressure readings at the feeds, that would be sufficient to give the flow rates across the inner clusters.
I have to determine the minimum communication range, based on the transmission power, to send a ping message from one host to another wirelessly through a simple wall in OMNeT++.
I'm using the DielectricObstacleLoss model.
My .ini file is:
[Config Network]
network = Network
# application parameters
*.source.numPingApps = 1
*.source.pingApp[0].destAddr = "destination"
*.source.wlan[0].radio.displayCommunicationRange = true
# radio medium parameters
*.radioMedium.obstacleLossType = "DielectricObstacleLoss"
I read the INET manual (https://omnetpp.org/doc/inet/api-current/inet-manual-draft.pdf), and in the Path Loss Models section (page 45) it is written that computing the traveled distance based on the power loss factor, the signal frequency, and the propagation speed is useful for determining the maximum communication range based on the transmission power.
However, I haven't seen anything about transmitter power in the FreeSpacePathLoss.cc and TwoRayGroundReflection.cc code.
So, can I use the FreeSpacePathLoss or TwoRayGroundReflection models to solve my problem, or is there another solution?
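For reference, under the free-space model the received power falls off as Pr = Pt·(λ/(4πd))², so the maximum range at a given receiver sensitivity can be computed directly. The sketch below is a back-of-the-envelope calculation of that inversion; the numbers are illustrative, not INET defaults:

```python
import math

C = 299792458.0  # speed of light, m/s

def max_range(pt_dbm, sens_dbm, freq_hz, alpha=2.0):
    """Maximum range from Pr = Pt * (lam / (4*pi*d))**alpha.

    pt_dbm:   transmit power in dBm
    sens_dbm: receiver sensitivity (minimum decodable power) in dBm
    alpha:    path-loss exponent (2.0 = free space)
    """
    lam = C / freq_hz
    ratio = 10.0 ** ((pt_dbm - sens_dbm) / 10.0)  # Pt / Pr_min, linear
    return (lam / (4.0 * math.pi)) * ratio ** (1.0 / alpha)

# illustrative 802.11p-ish numbers: 20 dBm out, -85 dBm sensitivity, 5.9 GHz
d = max_range(pt_dbm=20.0, sens_dbm=-85.0, freq_hz=5.9e9)
print(round(d))  # ≈ 719 m
```

An obstacle-loss model like DielectricObstacleLoss subtracts an extra attenuation term for the wall, which shrinks this range accordingly; the formula above only covers the free-space part.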
Source: Google Interview Question
Given a large network of computers, each keeping log files of visited urls, find the top ten most visited URLs.
Have many large <string (url) -> int (visits)> maps.
Calculate <string (url) -> int (sum of visits among all distributed maps)> and get the top ten in the combined map.
Main constraint: The maps are too large to transmit over the network. Also can't use MapReduce directly.
I have now come across quite a few questions of this type, where processing needs to be done over large distributed systems, and I can't think of or find a suitable answer.
All I can think of is brute force, which in some way or other violates the given constraint.
The statement that you can't use map-reduce directly is a hint that the author wants you to think about how map-reduce works, so we will just mimic its actions:
Pre-processing: let R be the number of servers in the cluster, and give each server a unique id from 0, 1, 2, ..., R-1.
(map) For each (string, count) pair, send the tuple to the server whose id is hash(string) % R.
(reduce) Once the previous step is done (simple control communication), each server produces the (string, count) pairs of its top 10 strings. Note that these are computed over the tuples sent to this particular server in the map step.
(map) Each server sends its top 10 to one server (say, server 0). That is fine; there are only 10*R such records.
(reduce) Server 0 yields the top 10 across the network.
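The steps above can be sketched in a single process, with each Counter standing in for one server's local map (the function name is mine):

```python
from collections import Counter
import heapq

def top10_distributed(server_logs, R):
    """Mimic the two map-reduce rounds: partition by hash, then merge tops.

    server_logs: one Counter (url -> visits) per original server
    R:           number of servers acting as partition owners
    """
    # round 1 (map): route every (url, count) to the server hash(url) % R
    buckets = [Counter() for _ in range(R)]
    for counts in server_logs:
        for url, n in counts.items():
            buckets[hash(url) % R][url] += n
    # round 1 (reduce): a partition owner now holds ALL counts for its urls,
    # so its local top-10 is globally correct for those urls
    candidates = []
    for b in buckets:
        candidates.extend(b.most_common(10))
    # round 2: one server merges the at most 10*R candidate records
    return heapq.nlargest(10, candidates, key=lambda kv: kv[1])

logs = [Counter({'a': 5, 'b': 3}), Counter({'a': 2, 'c': 4})]
print(top10_distributed(logs, 3))
```

The key property is that hashing sends every occurrence of a given url to the same partition owner, which is why the per-partition top-10s can safely be merged.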
Notes:
The problem with this algorithm, as with most big-data algorithms that don't use frameworks, is handling failing servers; MapReduce takes care of that for you.
The above algorithm translates fairly straightforwardly into a two-phase map-reduce algorithm.
In the worst case, any algorithm that does not transmit the whole frequency table is going to fail: we can construct a trivial case where the global top-10 entries are all near the bottom of every individual machine's list.
If we assume that the frequencies of the URIs follow Zipf's law, we can come up with effective solutions. One such solution follows.
Each machine sends its top-K elements, where K depends solely on the available bandwidth. One master machine aggregates the frequencies and finds the 10th-largest frequency value, "V10" (note that this is a lower bound: since the global top-10 may not be in the top-K of every machine, the sums are incomplete).
In the next step, every machine sends the list of URIs whose local frequency is at least V10/M (where M is the number of machines). The union of all such lists is sent back to every machine, and each machine in turn sends back its frequencies for this particular list. The master aggregates these into the top-10 list.
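A sketch of this two-round protocol, simulated in one process (names are mine; the real version would do these exchanges over the network):

```python
from collections import Counter

def zipf_top10(machines, K):
    """Two-round threshold protocol. machines: list of Counter (url -> count)."""
    M = len(machines)
    # round 1: every machine ships its local top-K; the master aggregates
    partial = Counter()
    for m in machines:
        partial.update(dict(m.most_common(K)))
    # 10th-largest aggregated value: a lower bound on the true 10th frequency
    v10 = sorted(partial.values(), reverse=True)[:10][-1]
    # round 2: each machine reports every url whose LOCAL count >= v10 / M
    # (any url missed by all machines here cannot reach v10 globally)
    candidates = set()
    for m in machines:
        candidates.update(u for u, n in m.items() if n >= v10 / M)
    # exact global counts, but only for the candidate urls
    exact = Counter()
    for m in machines:
        for u in candidates:
            exact[u] += m[u]
    return [u for u, _ in exact.most_common(10)]

machines = [Counter({'a': 100, 'b': 50, 'c': 1}), Counter({'a': 90, 'd': 60})]
print(zipf_top10(machines, 2))
```

Note how 'd' is recovered even though it was outside one machine's top-K: the V10/M threshold in round 2 is what makes the candidate set safe.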