How can I analyse the path loss in Veins during the communication between two nodes? I looked through the analogue model files and found that a simple path-loss module is used, but I don't know how this module could be used to accumulate path-loss results. Do I have to add separate statistics to extract them?
Path loss is a communication phenomenon. If you want to analyze it, you have to look at metrics that quantify its effect.
As an analogy, if someone calls your name from a distance, you might not hear them because the signal power (the amplitude of the voice) has been attenuated over that distance.
For example, you can look at recvPower in Decider80211p, or at higher-level statistics recorded in Mac1609_4.
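If you want the attenuation itself rather than its side effects, you can post-process those metrics. A minimal sketch (not part of Veins itself), assuming you have logged the transmit power alongside recvPower, both in mW:

```cpp
#include <cmath>

// Path loss experienced by a frame, in dB, derived from logged powers.
double pathLossDb(double txPower_mW, double rxPower_mW) {
    return 10.0 * std::log10(txPower_mW / rxPower_mW);
}
```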
For more information about path loss in Veins, see this Q&A: Maximum transmission range vs maximum interference distance
In a network with some wireless nodes, OMNeT++ generates a scalar file after the simulation.
I want to calculate the throughput, goodput, end-to-end delay, and packet loss count from the generated scalar file.
Is there a tool or script that calculates them?
If there isn't, what is the best solution?
I searched for similar questions on Stack Overflow; most of them are unanswered.
Assuming you are using INET: if the application layer emits a packetReceived signal, then you can use the dataAge(packetReceived) and throughput(packetReceived) result filters.
These will record the corresponding results in the .sca and .vec files.
For an example of how to use them see: https://github.com/inet-framework/inet/blob/007bc454ec7749e2dea8fcb808429d21074880ad/src/inet/applications/udpapp/UdpSink.ned
Some example applications which will work with this are: IpvxTrafGen, SctpClient/Peer, TcpAppBase (and derivatives), UdpBasicApp/Burst, UdpSink etc.
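If you are writing your own application module instead, the usual INET-style pattern is to register the signal once and emit the received packet. A minimal hypothetical sketch (the module name is made up; check the signal name against your INET version):

```cpp
#include <omnetpp.h>
using namespace omnetpp;

class MySink : public cSimpleModule {
  protected:
    simsignal_t packetReceivedSignal;

    virtual void initialize() override {
        // The name must match the signal/statistic declared in the .ned file.
        packetReceivedSignal = registerSignal("packetReceived");
    }
    virtual void handleMessage(cMessage *msg) override {
        // Emitting the received object lets result filters such as
        // dataAge(packetReceived) and throughput(packetReceived) process it.
        emit(packetReceivedSignal, msg);
        delete msg;
    }
};

Define_Module(MySink);
```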
I'm doing network simulation using the OMNeT++ software and Castalia, and I want to know how to plot power consumption and RSSI value curves.
Thanks in advance.
The answer to this question depends on exactly what software you're using. OMNeT++ provides a suite and toolchain for visualization, but as user Thanassis points out below, Castalia has its own specialized evaluation tools.
For general OMNeT++ simulations
The OMNeT++ tutorial covers this in quite a lot of detail. Basically:
tell OMNeT++ what data you're recording (by editing the .ned files) using the @statistic annotation
modify the simulation to track the data you're interested in, either over simulation time (recorded as vectors) or once per simulation run (recorded as scalars), essentially by calling emit() (see the sketch after this list)
make sure that your simulation (omnetpp.ini) is set up to collect the statistics (by setting e.g. histogram or vector as the type of collected data)
use either the graphical OMNeT++ interface or your own scripts to analyze the simulation output, which is stored by default in a folder called results next to wherever you run the simulation, unless you change that in omnetpp.ini
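Putting the first two bullets together, here is a minimal hypothetical sketch; the module, signal, and statistic names (queueLength and so on) are invented for illustration:

```cpp
// The matching .ned file would declare something like:
//   @signal[queueLength](type=long);
//   @statistic[queueLength](record=vector,timeavg);
#include <omnetpp.h>
using namespace omnetpp;

class MyNode : public cSimpleModule {
  protected:
    simsignal_t queueLengthSignal;
    long queueLength = 0;

    virtual void initialize() override {
        queueLengthSignal = registerSignal("queueLength");
    }
    virtual void handleMessage(cMessage *msg) override {
        ++queueLength;                        // whatever your model tracks
        emit(queueLengthSignal, queueLength); // one vector sample over simtime
        delete msg;
    }
    virtual void finish() override {
        recordScalar("finalQueueLength", queueLength); // one value per run
    }
};

Define_Module(MyNode);
```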
Please also refer to these related questions:
How to collect traffic data and macroscopic statistics in Veins?
Calculating end-to-end delay for SimpleServerApp in Veins-LTE
My problem is the following: I need to classify a data stream coming from a sensor. I have managed to get a baseline using the median of a window, and I subtract the values from that baseline (I want to avoid negative peaks, so I only use the absolute value of the difference).
Now I need to distinguish an event (= something triggered the sensor) from the noise near the baseline:
The problem is that I don't know which method to use.
There are several approaches I have thought of:
sum up the values in a window; if the sum is above a threshold, the class should be EVENT ('integrate and dump')
sum up the differences of the values in a window and take the mean (which gives something like the first derivative); if the value is positive and above a threshold, set the class to EVENT, otherwise to NO-EVENT
combination of both
(unfortunately these approaches have the drawback that I need to guess the threshold values and set the window size)
using an SVM that learns from manually classified data (but I don't know how to set up this algorithm properly: which features should I look at, e.g. the median/mean of a window, the integral, the first derivative, ...?)
What would you suggest? Are there better/simpler methods to get this task done?
I know there exist a lot of sophisticated algorithms, but I'm confused about what could be the best way; please have a little patience with a newbie who has no machine learning/DSP background :)
Thank you a lot and best regards.
The key to evaluating your heuristic is to develop a model of the behaviour of the system.
For example, what is the model of the physical process you are monitoring? Do you expect your samples, for example, to be correlated in time?
What is the model for the sensor output? Can it be modelled as, for example, a discretized linear function of the voltage? Is there a noise component? Is the magnitude of the noise known or unknown but constant?
Once you've listed your knowledge of the system that you're monitoring, you can then use that to evaluate and decide upon a good classification system. You may then also get an estimate of its accuracy, which is useful for consumers of the output of your classifier.
Edit:
Given the more detailed description, I'd suggest trying some simple models of behaviour that can be tackled using classical techniques before moving to a generic supervised learning heuristic.
For example, suppose:
The baseline, event threshold and noise magnitude are all known a priori.
The underlying process can be modelled as a Markov chain: it has two states (off and on) and the transition times between them are exponentially distributed.
You could then use a hidden Markov model (HMM) approach to determine the most likely underlying state at any given time. Even when the noise parameters and thresholds are unknown, you can use the HMM forward-backward training method to train the parameters (e.g. mean and variance of a Gaussian) associated with the output of each state.
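As a concrete illustration, here is a hedged sketch of Viterbi decoding for such a two-state chain with Gaussian outputs; all parameters (the two means, a shared sigma, the self-transition probability) are assumed to be known or already trained:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Log-likelihood of x under N(mu, sigma^2); the normalizing constant is
// dropped because both states share the same sigma, so it cancels.
static double logGauss(double x, double mu, double sigma) {
    double z = (x - mu) / sigma;
    return -0.5 * z * z;
}

// Most likely state sequence (0 = off, 1 = on) for samples x, given
// assumed emission means, shared sigma, and self-transition probability
// pStay (e.g. 0.99 for a slowly switching process).
std::vector<int> viterbiOffOn(const std::vector<double>& x,
                              double muOff, double muOn,
                              double sigma, double pStay) {
    if (x.empty()) return {};
    const size_t T = x.size();
    const double lStay = std::log(pStay);
    const double lSwitch = std::log(1.0 - pStay);
    std::vector<std::array<double, 2>> score(T);
    std::vector<std::array<int, 2>> back(T);
    score[0] = { logGauss(x[0], muOff, sigma), logGauss(x[0], muOn, sigma) };
    for (size_t t = 1; t < T; ++t) {
        for (int s = 0; s < 2; ++s) {
            double same = score[t - 1][s] + lStay;
            double other = score[t - 1][1 - s] + lSwitch;
            back[t][s] = (same >= other) ? s : 1 - s;
            double mu = (s == 0) ? muOff : muOn;
            score[t][s] = std::max(same, other) + logGauss(x[t], mu, sigma);
        }
    }
    // Backtrack from the best final state.
    std::vector<int> states(T);
    states[T - 1] = (score[T - 1][1] > score[T - 1][0]) ? 1 : 0;
    for (size_t t = T - 1; t > 0; --t)
        states[t - 1] = back[t][states[t]];
    return states;
}
```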
If you know even more about the events, you can get by with simpler approaches: for example, if you knew that the event signal always reached a level above the baseline + noise, and that events were always separated in time by an interval larger than the width of the event itself, you could just do a simple threshold test.
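A hedged sketch of such a threshold test, with a minimum separation between events standing in for the "separated in time" assumption (both parameters are yours to tune):

```cpp
#include <cstddef>
#include <vector>

// Returns the start indices of detected events: a new event begins when a
// (baseline-corrected) sample exceeds the threshold and the previous
// above-threshold sample is at least minSeparation samples in the past.
std::vector<size_t> detectEvents(const std::vector<double>& x,
                                 double threshold, size_t minSeparation) {
    std::vector<size_t> events;
    size_t lastAbove = 0;
    bool haveEvent = false;
    for (size_t i = 0; i < x.size(); ++i) {
        if (x[i] > threshold) {
            if (!haveEvent || i - lastAbove >= minSeparation) {
                events.push_back(i);  // start of a new event
                haveEvent = true;
            }
            lastAbove = i;            // extend the current event
        }
    }
    return events;
}
```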
Edit:
The classic intro to HMMs is Rabiner's tutorial (a copy can be found here). Also relevant are these errata.
From your description, a correctly parameterized moving average might be sufficient.
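For concreteness, a minimal sketch of such a moving average; the window size is the parameter you would have to tune for your sensor:

```cpp
#include <cstddef>
#include <deque>

// Running average over the last 'capacity' samples.
class MovingAverage {
    std::deque<double> window;
    double sum = 0.0;
    size_t capacity;
  public:
    explicit MovingAverage(size_t n) : capacity(n) {}
    double push(double sample) {
        window.push_back(sample);
        sum += sample;
        if (window.size() > capacity) {
            sum -= window.front();
            window.pop_front();
        }
        return sum / window.size();
    }
};
```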
Try to understand the sensor and its output. Build a model and write a simulator that provides mock data covering the expected signals, including noise and all that stuff.
Get lots of real sensor data recorded.
Visualize the data and verify your assumptions and model.
Annotate your sensor data, i.e. generate ground truth (your simulator should do that for the mock data).
From what you have learned so far, propose one or more algorithms.
Build a test system that can verify your algorithms against the ground truth and run regressions against previous runs.
Implement your proposed algorithms and run them against the ground truth.
Try to understand the false positives and false negatives in the recorded data (and try to adapt your simulator to reproduce them).
Adapt your algorithm(s).
Some other tips:
you may implement hysteresis on thresholds to avoid bouncing (see the sketch after this list)
you may implement delays to avoid bouncing
beware of delays if implementing debouncers or low pass filters
you may implement multiple algorithms and voting
for testing relative improvements, you may run regression tests on large amounts of unannotated data; then check only the detections that flipped between runs to judge whether performance increased or decreased
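A minimal sketch of the hysteresis idea from the first tip (the two thresholds are assumptions to be tuned): the detector switches on only above the high threshold and off only below the low one, so noise around a single threshold cannot make it bounce:

```cpp
// Schmitt-trigger-style detector with two thresholds.
class HysteresisDetector {
    double low, high;
    bool on = false;
  public:
    HysteresisDetector(double lowThreshold, double highThreshold)
        : low(lowThreshold), high(highThreshold) {}
    bool update(double sample) {
        if (!on && sample > high) on = true;       // arm only above 'high'
        else if (on && sample < low) on = false;   // release only below 'low'
        return on;
    }
};
```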
I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs time. After an operation is finished, we plan to send an array of currents vs time back to the server. A successful operation produces a pattern like a trapezoid; problematic operations produce a completely different pattern, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output?
Thanks,
Fred
Maybe taking the FFT and passing it through a radial basis function (RBF) neural network will do the trick. It seems the features you are looking for are periodic, which the FFT will capture, and the RBF network can do the learning.
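A hedged sketch of the feature-extraction half of this suggestion; a naive O(n^2) DFT is used for clarity (in practice you would use an FFT library), and the number of bins kept is an assumption to tune:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Magnitudes of the first nBins DFT coefficients of the current-vs-time
// trace; these would form the input feature vector for the RBF network.
std::vector<double> dftMagnitudes(const std::vector<double>& x, size_t nBins) {
    const double pi = std::acos(-1.0);
    std::vector<double> mags;
    for (size_t k = 0; k < nBins && k < x.size(); ++k) {
        std::complex<double> acc(0.0, 0.0);
        for (size_t n = 0; n < x.size(); ++n)
            acc += x[n] * std::exp(std::complex<double>(
                              0.0, -2.0 * pi * double(k) * double(n) / x.size()));
        mags.push_back(std::abs(acc));
    }
    return mags;
}
```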
Many types of neural network might be used to solve this problem, but I imagine that a relatively simple scoring function might work as well and be much easier to implement. If you can identify the likely locations of the beginning and ending of your trapezoid, I suggest trying something like average "absolute difference from a trapezoidal template shape" as a measure of machine performance.
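A minimal sketch of that scoring idea; the ramp fraction, the amplitude normalization, and the pass/fail cutoff are all assumptions to tune on real traces:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Unit-height trapezoid of length n: ramp up, plateau, ramp down.
std::vector<double> trapezoidTemplate(size_t n, double rampFraction = 0.2) {
    std::vector<double> t(n);
    size_t ramp = static_cast<size_t>(n * rampFraction);
    if (ramp == 0) ramp = 1;
    for (size_t i = 0; i < n; ++i) {
        if (i < ramp)           t[i] = double(i) / ramp;          // ramp up
        else if (i >= n - ramp) t[i] = double(n - 1 - i) / ramp;  // ramp down
        else                    t[i] = 1.0;                       // plateau
    }
    return t;
}

// Mean absolute difference between the amplitude-normalized trace and the
// template; assumes both have the same length. Lower scores mean "more
// trapezoidal"; a cutoff such as 0.15 is a starting point to tune.
double meanAbsDifference(const std::vector<double>& trace,
                         const std::vector<double>& tmpl) {
    double maxV = 1e-12;
    for (double v : trace) maxV = std::max(maxV, std::fabs(v));
    double sum = 0.0;
    for (size_t i = 0; i < trace.size(); ++i)
        sum += std::fabs(trace[i] / maxV - tmpl[i]);
    return sum / trace.size();
}
```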
Here's my scenario. Consider a set of events that happen at various places and times; as an example, consider someone high above recording the lightning strikes in a city during a storm. For my purpose, lightning strikes are instantaneous and can only hit certain locations (such as tall buildings). Also imagine each strike has a unique id so one can reference it later. There are about 100,000 such locations in this city (as you may have guessed, this is an analogy, as my current employer is sensitive about the actual problem).
For phase 1, my input is the set of (strike id, strike time, strike location) tuples. The desired output is the set of clusters of more than one event hitting the same location within a short time. The number of clusters is not known in advance (so k-means is not that useful here). What counts as 'short' can be predefined for a given clustering attempt: I can set it to, say, 3 minutes, then run the algorithm, and later try 4 or 10 minutes. Perhaps a nice touch would be for the algorithm to determine a 'strength' of clustering and recommend the value of 'short' that yields the most compact clustering for a given input, but this is not required initially.
For phase 2, I'd like to take into consideration the amplitude of the strike (i.e., a real number) and look for clusters that are both within a short time and with similar amplitudes.
I googled and checked the answers here about data clustering. The information is a bit bewildering (below is the list of links I found useful). AFAIK, k-means and related algorithms would not be useful because they require the number of clusters to be specified a priori. I'm not asking anyone to solve my problem (I like solving it myself), but some orientation in the large world of data clustering algorithms would save me time. Specifically: which clustering algorithms are appropriate when the number of clusters is unknown?
Edit: I realized the location is irrelevant, in the sense that although events happen all the time, I only need to cluster them per location. So each location has its own time-series of events that can thus be analyzed independently.
Some technical details:
- as the dataset is not that large, it all fits in memory.
- parallel processing is nice to have, but not essential. I only have a 4-core machine, and MapReduce and Hadoop would be too much.
- the language I'm most familiar with is Java. I haven't yet used R, and its learning curve would probably be too steep for the time I've been given. I'll have a look at it anyway in my spare time.
- for the time being, using tools to run the analysis is OK; I don't have to produce code. I'm mentioning this because Weka will probably be suggested.
- visualization would be useful and should at least support zooming and panning, given the size of the dataset. And to clarify: I don't need to build a visualization GUI; it's just a nice capability to have for checking the results produced with a tool.
Thank you. Questions that I found useful are: How to find center of clusters of numbers? statistics problem?, Clustering Algorithm for Paper Boys, Java Clustering Library, How to cluster objects (without coordinates), Algorithm for detecting "clusters" of dots
I would suggest you look into mean shift clustering. The basic idea behind mean shift clustering is to take the data and perform a kernel density estimation, then find the modes of the density estimate; the regions of data points converging towards the same mode define the clusters.
The nice thing about mean shift clustering is that the number of clusters does not have to be specified ahead of time.
I have not used Weka, so I am not sure whether it has mean shift clustering. However, if you are using MATLAB, here is a toolbox (KDE toolbox) to do it. Hope that helps.
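If you end up rolling your own, here is a hedged sketch of mean shift in one dimension with a flat kernel; the bandwidth parameter plays the role of your 'short' interval:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Shift each point to the mean of its neighbours within 'bandwidth' until
// convergence, then label points whose modes coincide as one cluster.
std::vector<int> meanShift1d(const std::vector<double>& data,
                             double bandwidth, int maxIter = 100) {
    std::vector<double> modes(data);
    for (size_t i = 0; i < modes.size(); ++i) {
        for (int it = 0; it < maxIter; ++it) {
            double sum = 0.0;
            size_t count = 0;
            for (double x : data)
                if (std::fabs(x - modes[i]) <= bandwidth) { sum += x; ++count; }
            double next = sum / count;  // count >= 1: a point is its own neighbour
            if (std::fabs(next - modes[i]) < 1e-9) break;
            modes[i] = next;
        }
    }
    // Merge modes closer than the bandwidth into shared cluster labels.
    std::vector<double> centers;
    std::vector<int> labels(data.size());
    for (size_t i = 0; i < modes.size(); ++i) {
        int label = -1;
        for (size_t c = 0; c < centers.size(); ++c)
            if (std::fabs(modes[i] - centers[c]) < bandwidth) {
                label = static_cast<int>(c);
                break;
            }
        if (label < 0) {
            centers.push_back(modes[i]);
            label = static_cast<int>(centers.size()) - 1;
        }
        labels[i] = label;
    }
    return labels;
}
```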
Couldn't you just use hierarchical clustering with the difference in times of strikes as part of the distance metric?
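In one dimension (time, per location, following the question's edit) with single linkage and a distance cutoff, this reduces to sorting the strike times and cutting wherever the gap exceeds the 'short' interval; a minimal sketch:

```cpp
#include <algorithm>
#include <vector>

// Split the sorted strike times of one location into clusters wherever
// consecutive strikes are more than 'shortGap' apart (the 3/4/10-minute
// parameter from the question).
std::vector<std::vector<double>> clusterByTimeGap(std::vector<double> times,
                                                  double shortGap) {
    std::sort(times.begin(), times.end());
    std::vector<std::vector<double>> clusters;
    for (double t : times) {
        if (clusters.empty() || t - clusters.back().back() > shortGap)
            clusters.push_back({});  // gap too large: start a new cluster
        clusters.back().push_back(t);
    }
    return clusters;
}
// Keep only clusters with more than one strike, per the question.
```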
It is too late, but I would still add it:
In R, there is a package called fpc with a method pamk() which computes the clustering for you. With pamk(), you do not need to specify the number of clusters initially; it estimates the number of clusters from the input data itself.