I would just like to know whether the value of "ReceivedBroadcasts" for each node in Veins also includes lost packets, or whether it only counts successful packet receptions. In other words, to calculate the packet loss ratio, should it be TotalLostPackets/ReceivedBroadcasts or TotalLostPackets/(ReceivedBroadcasts + TotalLostPackets)?
Thanks for your help
Your best bet to find out exactly how statistics are logged is to look at the source code of Mac1609_4.
You will find that the ReceivedBroadcasts scalar logs the value of the variable statsReceivedBroadcasts, which is incremented via a method called from handleLowerMsg, that is, only when the MAC layer successfully decoded the data.
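Given that, lost packets are not part of the ReceivedBroadcasts count, so the attempted total is received plus lost, which corresponds to the second formula in the question. A minimal sketch (the function name is just illustrative):

```python
# Packet loss ratio from the Veins scalars discussed above. Since
# ReceivedBroadcasts counts only frames the MAC layer successfully decoded,
# the number of attempted receptions is received + lost.

def packet_loss_ratio(received_broadcasts, total_lost_packets):
    attempted = received_broadcasts + total_lost_packets
    if attempted == 0:          # avoid division by zero for idle nodes
        return 0.0
    return total_lost_packets / attempted
```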
I am trying to drive a BLDC motor using the FOC algorithm. As shown in the figure, I have implemented an RPM controller on top of the current control loop of the FOC, where the output of the RPM PI controller is fed directly to Iqset of the FOC block and Idset is kept at 0.
(figure: RPM controller cascaded with the FOC current control loop)
Problem: I want Iqset to vary from 0 to 60 A, but this value stays low; even under stalling load, Iqset only reaches 4 to 5 A. I have tried different values of Kp and Ki when tuning the RPM controller but have been unable to achieve the desired result.
I don't know where I am going wrong; any help would be highly appreciated.
Further experimental info: I have tried setting Iqset to a fixed 0.25 A just to take the RPM controller out of the loop; in that case the motor runs at full speed with no problem. That points my attention to the RPM controller, but I don't know where I am going wrong.
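For reference, the cascaded structure described above (speed PI producing the Iq setpoint) can be sketched as a toy simulation. Everything here is illustrative: the gains, the 60 A clamp, and the first-order motor model are made-up numbers, not the actual FOC loop or motor.

```python
# Discrete PI speed controller whose output is the Iq current setpoint,
# clamped to the 0..60 A range mentioned in the question, driving a crude
# first-order motor model. All constants are illustrative.

IQ_MAX = 60.0  # current-command ceiling, per the question

def pi_step(err, integ, kp, ki, dt, out_min, out_max):
    """One PI update with output clamping and simple anti-windup."""
    integ += err * dt
    out = kp * err + ki * integ
    if out > out_max:
        out = out_max
        integ -= err * dt  # stop integrating while saturated
    elif out < out_min:
        out = out_min
        integ -= err * dt
    return out, integ

def simulate(rpm_target=1000.0, kp=0.5, ki=2.0, dt=1e-3, steps=5000):
    rpm, integ, iq_set = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = rpm_target - rpm
        iq_set, integ = pi_step(err, integ, kp, ki, dt, 0.0, IQ_MAX)
        # toy plant: rpm follows current via a torque gain minus friction
        rpm += (40.0 * iq_set - 0.5 * rpm) * dt
    return rpm, iq_set

final_rpm, final_iq = simulate()
```

With a model like this it is easy to see how a too-tight output clamp or too-small gains keep the commanded Iq low regardless of the speed error, which is one thing worth checking in the real controller.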
Thanks.
I want to integrate a fingerprint sensor into my project. For now I have shortlisted the R307, which has a capacity of 1000 fingerprints. But as the project requires more than 1000 prints, I am going to store the prints on the host.
From reading the datasheet, the procedure I understand for achieving the project requirements is:
1. Register the fingerprint with "GenImg".
2. Download the template with "UpChar".
3. Whenever a new fingerprint arrives, follow steps 1 and 2.
4. Run some sort of matching algorithm that compares the freshly downloaded template file with the template files stored in the database.
So these are the points on which I want your thoughts:
Is the procedure I have written above correct and optimized?
Is the matching algorithm straightforward (just a comparison), or is it trickier? How can I implement it? Please suggest a library if one already exists.
The sensor stores the image as 256 × 288 pixels, and if I transfer this file to the host at the maximum data rate it takes ~5 seconds (256 × 288 × 8 / 115200), which seems very long.
Thanks
Abhishek
PS: By "host" I mean the device I am going to connect the sensor to; it could be an Arduino, a Pi, or any other computing device. I will choose depending on how much computation the task requires.
You most probably figured it out yourself, but for anyone stumbling on this in the future:
You're correct for the most part.
1. Take the finger image (GenImg).
2. Generate a character file from it (Img2Tz) into BufferID 1.
3. Repeat the two steps above, but this time store the character file in BufferID 2.
4. Generate a template by combining those two character files (RegModel). The device combines them for you and writes the template into both character buffers.
5. As a last step, store this template in your storage (Store).
For searching the finger: take the finger image once, generate a character file in BufferID 1, and search the library (Search). This performs a linear search and returns the finger ID along with a confidence score.
There's also another command (GR_Identify) that does all of the above automatically.
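The enroll/search flow above can be sketched against a stand-in sensor object. R307Link and its methods are hypothetical placeholders for whatever UART driver you end up using; only the command sequence (GenImg → Img2Tz → RegModel → Store for enrollment, GenImg → Img2Tz → Search for identification) comes from the datasheet.

```python
# Hypothetical stub standing in for a real R307 UART driver; replace the
# method bodies with actual packet exchanges. Only the call order matters.

class R307Link:
    def gen_img(self): pass            # GenImg: capture a finger image
    def img2tz(self, buffer_id): pass  # Img2Tz: image -> character file
    def reg_model(self): pass          # RegModel: combine buffers 1 and 2
    def store(self, page_id): pass     # Store: save template to sensor flash
    def search(self):                  # Search: linear match over the library
        return (0, 100)                # (finger id, confidence) placeholder

def enroll(sensor, page_id):
    for buffer_id in (1, 2):           # two reads of the same finger
        sensor.gen_img()
        sensor.img2tz(buffer_id)
    sensor.reg_model()                 # build the template from both buffers
    sensor.store(page_id)

def identify(sensor):
    sensor.gen_img()
    sensor.img2tz(1)
    return sensor.search()             # returns (finger id, confidence score)
```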
The question about optimization isn't really applicable here: you're using a third-party device, and you have to follow its working instructions whether they're optimized or not.
"The sensor stores the image as 256 × 288 pixels, and if I transfer this file to the host at the maximum data rate it takes ~5 seconds (256 × 288 × 8 / 115200), which seems very long."
I don't really get what you mean by this, but the template file (the one you intend to upload to your host) is only 512 bytes; it shouldn't take much time.
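A back-of-envelope comparison makes the difference concrete (8 bits per byte at 115200 baud; UART start/stop-bit framing overhead is ignored here, so real transfers are somewhat slower):

```python
# Transfer time at 115200 baud: full raw image vs. 512-byte template.
BAUD = 115200

def transfer_seconds(n_bytes, baud=BAUD):
    return n_bytes * 8 / baud

image_s = transfer_seconds(256 * 288)  # raw 256x288 image: ~5.12 s
template_s = transfer_seconds(512)     # character-file template: ~0.036 s
```

So moving templates instead of raw images cuts the per-finger transfer from seconds to tens of milliseconds.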
If you want an overview of how this system is implemented; Adafruit's Library is a good reference.
I'm trying to read data from the Yost Labs 3-Space Sensor Nano into LabVIEW via an NI myRIO-1900. I was able to set up a sequence that communicates with the sensor over SPI. However, every time I run the program, it just spits out a single value of 255.
I think I understand that I need to include something that allows all the bytes to be read; I just don't know how to go about it.
As an example, I'm trying to read the gyros (0x26), which have a return length of 12 bytes and return a vector (float × 3).
Here is my LabVIEW code,
and here is the manual for the sensor. The commands I'm using are on pages 29-33. In the image, 0x2B is 'read temperature'.
Any help would be greatly appreciated! Thanks :)
Edit: I had messed up the wiring, so now the output jumps between ~35 and 255. I'm still having trouble getting all three gyro values from the SPI read.
Quoting from Joe Friedrichsen in his comment:
The express block that resets the sensor is not guaranteed to precede the loop because there is no data flow between them. The LabVIEW runtime can see two independent and parallel groups and may choose to execute them simultaneously (which on the wire might mean reset comes between loop commands) or in "reverse" order. Add a wire from reset block to create a terminal on the loop.
Here's a picture of the fix.
You may wish to consider stringing the error wire through your program and wiring it to the stop terminal of the While Loop. Currently, your loop will keep running even if there's a fault in your hardware. Using the error wire would eliminate the need for the flat sequence structure.
Greetings all,
I have two questions regarding OMNeT++ output results.
1- I have a simulation that uses the AODV routing protocol in a VANET, but when I record a pcap of this simulation, it shows up as corrupted or damaged, even when I split it into multiple pcap files. How can I solve that? Does the INET pcap recorder support AODV?
2- Is there a way to write this output (in the picture below) to a text file or export it in Excel format (I highlighted it in red)?
I need all the information (event #, time, relevant hop, name, info). I can copy it by hand, but that takes time when I have around 10,000 events.
Thanks in advance
If I am not wrong, you should be able to record the information you want with the statistics methods provided by OMNeT++. Refer to cOutVector for real-time data recording. Statistics collection is well explained in Part 5 of the OMNeT++ TicToc tutorial.
Once you have recorded your data, you can export them from the Browse Data interface (select the vectors you previously recorded in the browse data tab, right click and select Export Data).
(Note: this is related to a question I posted before,
"H2O (open source) for K-means clustering".)
I am using K-Means on our data set of about 100 features (some of them are timestamps).
(1) I checked the "OUTPUT - CLUSTER MEANS" section, and the timestamp field has a value like "1.4144556086883196e+22". Our timestamp data is from the year 2018, and a 2018 Unix time looks like "1541092918000". Hence, it cannot be that big a number. My understanding is that the numbers in the "OUTPUT - CLUSTER MEANS" section should be close to the raw data (before standardization). Right?
(2) About standardization: can you use this example https://github.com/h2oai/h2o-3/blob/master/h2o-genmodel/src/test/resources/hex/genmodel/algos/kmeans/model.ini#L21-L27 and tell me how the input data is converted to a standardized value? Say I have a raw vector (a, b, c, d, 1.8), where I keep only the last element and omit the others. How can I know whether it is close to center_2 below in this example? Can you show me how H2O converts the raw data using standardize_means, standardize_mults and standardize_modes? I am sure H2O has a way to compute the standardized value from the model output, but I cannot find the place or the formula.
center_2 = [2.0, 0.0, -0.5466317772145349, 0.04096506994984166, 2.1628815416218337]
Thanks.
1) I'm not sure where you are seeing a timestamp in Flow, or whether you mean your dataset contains a timestamp that H2O-3 has converted. Either way, it sounds like you may have encountered a bug. The timestamps you see in H2O-3 are milliseconds since the Unix epoch, so you have to divide by 1000 before using a Unix time converter (for example https://currentmillis.com/). But again, given that the number is so large, I'm leaning towards a bug; any code you can provide to make it reproducible would be great.
1a) When you check "standardize" in Flow, in addition to "OUTPUT - CLUSTER MEANS" (which is not standardized) you will see "OUTPUT - STANDARDIZED CLUSTER MEANS", so the non-standardized output should reflect the units of your input.
2) Standardization in H2O-3 is described here (which says: "standardizes numeric columns to have zero mean and unit variance"). The link you provided points to a model that has been saved as a MOJO for testing, and I'm not sure it makes sense to use it as an example. But in general, standardization in H2O-3 works exactly as standardization is ordinarily defined: subtract the column mean, then divide by the standard deviation.
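As a sketch of that formula, assuming (as the field names suggest) that standardize_means holds per-column means and standardize_mults holds the reciprocal standard deviations, the conversion in each direction looks like this (the numbers below are illustrative, not taken from that model.ini):

```python
# Standardize a numeric value to zero mean / unit variance, and invert it
# so a standardized cluster center can be compared against raw units.

def standardize(raw, mean, mult):
    """mult is assumed to be 1/std-dev (the 'standardize_mults' entry)."""
    return (raw - mean) * mult

def destandardize(z, mean, mult):
    """Map a standardized value back to raw units."""
    return z / mult + mean
```

To compare your raw value 1.8 against the last element of center_2, you would either standardize 1.8 with that column's mean/mult, or destandardize 2.1628815416218337 back to raw units, and then measure the distance.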