How to set antenna gain in Veins-5.x?

I want to implement the formula for the Free Space Propagation Model (the Friis equation, recalled after the parameter list below). However, I don't know how to set or get the gain of the transmitter and receiver antennas. Should I just assume unity gain? In Veins 5.2 the only available antenna parameters are:
*.**.phy80211p.antenna =
*.**.phy80211p.antennaOffsetX =
*.**.phy80211p.antennaOffsetY =
*.**.phy80211p.antennaOffsetZ =
*.**.phy80211p.antennaOffsetYaw =
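
For reference, the free-space model the question refers to is the Friis transmission equation, in which the transmitter and receiver antenna gains appear as explicit factors (for ideal isotropic antennas both gains are simply 1):

P_r = P_t * G_t * G_r * (lambda / (4 * pi * d))^2

where P_t and P_r are the transmitted and received power, G_t and G_r the antenna gains, lambda the wavelength, and d the distance between transmitter and receiver.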

The parameters in your question only specify the antenna's mounting offset on the vehicle, i.e. its position (X, Y, Z) and orientation (yaw).
The gain is set via the radiation pattern given in the antenna.xml file. The Veins HowTo provides some information on this.

Related

How to specify predictor and predicted variables in Golearn

I want to use Golearn, which is great, as a machine learning library. I have some very simple questions, for which I could not find answers, neither in the library docs nor on Google:
How to specify the predictor and predicted variables in a fit?
In a linear regression, one can compute Y = aX + b, or X = aY + b.
How does one specify to Golearn which column of the data set is X and which is Y?

How to set a convergence tolerance for a specific variable using Dymola?

So, I have a model of a tube with pressure loss, where the unknown is the mass flow rate. Normally, and in most models of this problem, the conservation equations are used to calculate the mass flow rate, but such models have lots of convergence issues (because of the blocked flow at the end of the tube, which results in an infinite pressure derivative there). See the figure below for a representation of the problem on the left and, on the right, a graph showing the infinite pressure derivative.
Because of that I'm using a model which is more robust, though it outputs not the mass flow rate but the tube length, which is already known. Therefore an iterative loop is needed to determine the mass flow rate. So I coded a function length that, given the tube geometry, boundary conditions and mass flow rate, outputs the calculated tube length, and wrote the equations like so:
parameter Real L "known tube length";
Real m_flow "mass flow rate (unknown)";
...
equation
  L = length(geometry, boundary, m_flow);
It simulates fine, but it takes ages... And it shouldn't, because the mass flow rate is rather insensitive to the tube length; e.g. if L = 3 I could say that m_flow has converged once the output of length is within L ± 0.1. On the other hand, the default convergence tolerance of DASSL in Dymola is 0.0001, which is fine for all the other variables but a major setback for my model here...
That being said, I'd like to know if there's a (hacky) way of setting a specific tolerance for L (from annotations or something). I was unable to find any solution online or in Dymola's user manual... So far I have managed a workaround by writing a second function which uses a Newton-Raphson method to determine the mass flow rate, something like:
function massflowrate
  input GeometryRecord geometry; // record types as in the full model
  input BoundaryRecord boundary;
  input Real m_flow_start;
  input Real tolerance;
  output Real m_flow;
protected
  Real error, L, dL, dLdm_flow, Delta_m_flow;
algorithm
  error := geometry.L; // anything larger than tolerance, so the loop is entered
  m_flow := m_flow_start;
  while error > tolerance loop
    L := length(geometry, boundary, m_flow);
    error := abs(geometry.L - L);
    // forward-difference approximation of dL/dm_flow
    dL := length(geometry, boundary, m_flow*1.001);
    dLdm_flow := (dL - L)/(0.001*m_flow);
    // Newton-Raphson step towards the known length geometry.L
    Delta_m_flow := (geometry.L - L)/dLdm_flow;
    m_flow := m_flow + Delta_m_flow;
  end while;
end massflowrate;
And then I use it in the equations section:
parameter Real L;
Real m_flow;
...
equation
m_flow = massflowrate(geometry, boundary, delay(m_flow, 10), tolerance);
Nevertheless, this solution is not without its problems: the real equations are very non-linear, and depending on the boundary conditions the solver gets stuck in a never-ending loop... =/
PS: I'm sorry for the long post and the lack of an MWE; the real equations are very long and full of thermodynamics, which I believe would not be of any help. Be that as it may, I can provide the real model if necessary.
Is the length function smooth? To me, it being non-smooth seems like a likely cause of problems, and the suggestions by @Phil might also be good ideas.
However, it should also be possible to do what you want as follows:
Real m_flow(nominal=1e9);
Explanation: The equations are normally solved to a certain tolerance in unknowns - in this case m_flow.
The tolerance for each variable is a relative/absolute tolerance taking into account the nominal value, and Dymola does not allow you to set different tolerances for different variables.
Thus the simple way to compute m_flow less accurately is by setting a high nominal value for it, since the error tolerance will be tol*(abs(m_flow)+abs(nominal(m_flow))) or something like that.
The downside is that it may be too inaccurate, e.g. causing additional events, or that the error is so random that the solver is still slowed down.
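To put rough numbers on the expression above (using Dymola's default tolerance of 0.0001 mentioned in the question, and assuming the error-tolerance formula is roughly what the integrator uses):

tol * (abs(m_flow) + abs(nominal(m_flow))) = 1e-4 * (abs(m_flow) + 1e9) ≈ 1e5

so the local error allowed for m_flow becomes huge compared to any realistic mass flow rate, which is exactly the "compute m_flow less accurately" effect, while all other variables keep their default accuracy.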

Integrating multiple raymarching samples

Let's say I'm using raymarching to render a field function. (This is on the CPU, not the GPU.) I have an algorithm like this crudely-written pseudocode:
pixelColour = arbitrary;
pixelTransmittance = 1.0;
t = initialStep;                    // must start > 0, otherwise t * stepFactor never advances
while (t < max_view_distance) {
    point = rayStart + t*rayDirection;
    emission, opacity = sampleFieldAt(point);
    // the ray segment between the previous t and this one is what the sample stands for
    pixelColour, pixelTransmittance =
        integrate(pixelColour, pixelTransmittance, emission, opacity);
    t = t * stepFactor;             // exponentially growing steps away from the camera
}
return pixelColour;
The logic is all really simple... but how does integrate() work?
Each sample actually represents a volume in my field, not a point, even though the sample is taken at a point; therefore the effect on the final pixel colour will vary according to the size of the volume.
I don't know how to do this. I've had a look around, but while I've found lots of code which does it (usually on Shadertoy), it all does it differently and I can't find any explanations of why. How does this work, and more importantly, what magic search terms will let me look it up on Google?
It's the Beer-Lambert law, which governs extinction through participating homogeneous media. No wonder I was unable to find any keywords which worked.
There's a good writeup here, which tells me almost everything I need to know, although it does rather gloss over the calculation of the phase functions. But at least now I know what to read up on.
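For anyone landing here, a minimal sketch of what integrate() can look like under the Beer-Lambert model (written in Python rather than the pseudocode above; emission is taken as the colour the segment would contribute if it were fully opaque, ds is the length of the ray segment the sample represents, and treating opacity as an extinction coefficient sigma is an assumption):

import math

def integrate(colour, transmittance, emission, sigma, ds):
    # Beer-Lambert: fraction of light that survives a homogeneous segment of length ds
    segment_transmittance = math.exp(-sigma * ds)
    # Front-to-back compositing: light added by this segment, attenuated by everything
    # already accumulated between the segment and the camera
    colour = colour + transmittance * emission * (1.0 - segment_transmittance)
    transmittance = transmittance * segment_transmittance
    return colour, transmittance

Because ds appears explicitly, the result stays consistent when the step size changes along the ray; transmittance can also be used to stop marching early once it gets close to zero.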

SVM training performance

I'm using SVMLib to train a simple SVM over the MNIST dataset. It contains 60,000 training examples. However, I have several performance issues: the training seems to be endless (after a few hours I had to kill it by hand because it was not responding). My code is very simple; I just call ovrtrain on the dataset with a linear kernel (-t 0) and no special constants:
function features = readFeatures(fileName)
[fid, msg] = fopen(fileName, 'r', 'ieee-be');
header = fread(fid, 4, "int32" , 0, "ieee-be");
if header(1) ~= 2051
fprintf("Wrong magic number!");
end
M = header(2);
rows = header(3);
columns = header(4);
features = fread(fid, [rows*columns, M], "uint8", 0, "ieee-be")'; % each image is stored contiguously, so read it as a column and transpose to M x (rows*columns)
fclose(fid);
return;
endfunction
function labels = readLabels(fileName)
[fid, msg] = fopen(fileName, 'r', 'ieee-be');
header = fread(fid, 2, "int32" , 0, "ieee-be");
if header(1) ~= 2049
fprintf("Wrong magic number!");
end
M = header(2);
labels = fread(fid, [M, 1], "uint8", 0, "ieee-be");
fclose(fid);
return;
endfunction
labels = readLabels("train-labels.idx1-ubyte");
features = readFeatures("train-images.idx3-ubyte");
model = ovrtrain(labels, features, "-t 0"); % doesn't respond...
My question: is this normal? I'm running it on Ubuntu in a virtual machine. Should I wait longer?
I don't know whether you already got your answer, but let me tell you what I suspect about your situation. 60,000 examples is not a lot for a powerful trainer like LibSVM. I am currently working on a training set of 6,000 examples and it takes 3 to 5 seconds to train. However, parameter selection is important, and that is probably what is taking so long. If the number of unique features in your data set is too high, then for any example there will be lots of zero feature values for the non-existing features. If the tool applies data scaling to your training set, most of those zero feature values will probably be scaled to some non-zero value, leaving you with an astronomical number of unique, non-zero-valued features for each and every example. That makes it very hard for an SVM tool to extract efficient parameter values.
Long story short: if you have done enough research on SVM tools and understand what I mean, either assign parameter values in the training command before executing it, or find a way to decrease the number of unique features. If you haven't, go and download the latest version of LibSVM and read the README files as well as the FAQ on the tool's website.
If none of these is the case, then sorry for taking your time. :) Good luck.
It might be an issue of convergence given the characteristics of your data.
Check which kernel you have as the default selection and change it. Also check the stopping criterion of the package. Additionally, if you are looking for a faster implementation, check MSVMpack, which is a parallel implementation of SVM.
Finally, feature selection is desirable in your case; you could end up with a good feature subset of almost half of what you have. In addition, you only need a portion of the data for training, e.g. 60-70% is sufficient.
First of all, 60k is a huge amount of data for training. Training that much data with a linear kernel will take forever unless you have a supercomputer. Also, you have selected a linear kernel function of degree 1; it is better to use a Gaussian or higher-degree polynomial kernel (degree 4, used with the same dataset, showed good training accuracy). Try adding the LIBSVM options -c (cost), -m (memory cache size) and -e (epsilon, the tolerance of the termination criterion, default 0.001). First run 1000 samples with a Gaussian or degree-4 polynomial kernel and compare the accuracy.
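A minimal sketch of that "run 1000 samples first" experiment, written in Python with scikit-learn rather than the MATLAB/Octave ovrtrain wrapper from the question (features and labels are assumed to be the arrays loaded above, and the parameter values are just starting points to compare against):

import numpy as np
from sklearn.svm import SVC

# features: (60000, 784) pixel matrix, labels: (60000,) digit labels, assumed already loaded
rng = np.random.default_rng(0)
idx = rng.choice(len(labels), size=1000, replace=False)
X, y = features[idx] / 255.0, labels[idx]   # scale pixels to [0, 1] before training

# RBF (Gaussian) kernel with an explicit cost, cache size (MB) and stopping tolerance,
# mirroring LIBSVM's -c, -m and -e options
clf = SVC(kernel="rbf", C=1.0, cache_size=1024, tol=1e-3)
clf.fit(X, y)
print("accuracy on the 1000-sample subset:", clf.score(X, y))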

Technique for balancing controller input and output

I have a system where I use RS232 to control a lamp that takes an input given as a float representing voltage (in the range 2.5 to 7.5). The lamp then gives an output in the range 0 to 6000, which is the brightness a sensor picks up.
What I want is to be able to balance the system so that I can specify a brightness value, and the system should home in on a voltage value that achieves it.
Is there some standard algorithm or technique to find what the voltage input should be in order to get a specific output? I was thinking of an algorithm which iteratively tries values and, from each try, determines some new value which should be better at achieving the desired output value (in my case, 3000).
The voltage values required tend to vary between different systems and also over the lifespan of the lamp, so this should preferably be done completely automatic.
I am just looking for a name for a technique or algorithm, but pseudo code works just as well. :)
Calibrate the system on initial run by trying all voltages between 2.5 and 7.5 in e.g. 0.1V increments, and record the sensor output.
Given e.g. 3000 as the desired brightness level, pick the voltage that gives the closest brightness, then adjust up/down in small increments based on the sensor output until the desired brightness is achieved. From time to time (when your calibrated values become less accurate), recalibrate.
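A minimal sketch of that calibrate-then-look-up idea in Python, where set_voltage() and read_brightness() are hypothetical callables standing in for the RS232 I/O:

import time

def calibrate(set_voltage, read_brightness, v_min=2.5, v_max=7.5, step=0.1):
    # set_voltage/read_brightness are the (hypothetical) RS232 I/O callables
    table = []
    v = v_min
    while v <= v_max + 1e-9:
        set_voltage(v)
        time.sleep(0.1)          # give the lamp time to settle before sampling
        table.append((v, read_brightness()))
        v += step
    return table

def voltage_for(target_brightness, table):
    # pick the calibrated voltage whose recorded brightness is closest to the target
    return min(table, key=lambda vb: abs(vb[1] - target_brightness))[0]

Starting from voltage_for(3000, table), the small up/down adjustments described above can close the remaining gap, and calibrate() can simply be re-run when the table drifts.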
After some more Wikipedia browsing I found this:
A PID controller (control loop feedback mechanism):
previous_error = setpoint - actual_position
integral = 0
start:
error = setpoint - actual_position
integral = integral + (error*dt)
derivative = (error - previous_error)/dt
output = (Kp*error) + (Ki*integral) + (Kd*derivative)
previous_error = error
wait(dt)
goto start
[edit]
By removing the "integral" component and tweaking the weights (Kp and Kd), the loop works perfectly.
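A minimal sketch of that reduced loop applied to the lamp, in Python, with read_brightness() and set_voltage() again as hypothetical RS232 callables and the gains as placeholders that need tuning:

import time

def control_loop(read_brightness, set_voltage, setpoint=3000,
                 kp=0.0015, kd=0.0005, dt=0.1):
    # read_brightness/set_voltage are the (hypothetical) RS232 I/O callables;
    # kp and kd are placeholder gains that need tuning for the actual lamp
    def clamp(v, lo=2.5, hi=7.5):
        return max(lo, min(hi, v))          # keep the voltage in the lamp's valid range

    previous_error = setpoint - read_brightness()
    while True:
        error = setpoint - read_brightness()
        derivative = (error - previous_error) / dt
        set_voltage(clamp(kp * error + kd * derivative))   # P and D terms only, integral removed
        previous_error = error
        time.sleep(dt)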
I am not at all into physics, but if you can assume that the relationship between voltage and brightness is somewhat close to linear, you can use a standard binary search.
Other than that, this reminds me of the inverted pendulum, which is one of the standard examples for the use of fuzzy logic.
