I want to use libsvm in Visual Studio 2010 just for classifying my test samples and nothing more.
I've worked with libsvm using the documentation provided on its official site.
So I used these steps in sequence:
1) svm-scale -l 0 -s range train.txt > train.scale
2) svm-scale -r range test.txt > test.scale
3) grid.py -svm-train "MYSVM_TRAIN_PATH" -gnuplot "MY_GNUPLOT_PATH" train.scale
4) svm-train -c 32 -g 0.05 -b 1 train.scale train.model
5) svm-predict test.scale train.model test.out
and it worked pretty well, but the problem is that I don't know how to do these steps in Visual Studio. I just loaded my train.model (from step 4 above) and did not repeat the training procedure in VS2010. Here is my code:
#include <iostream>
#include "svm.h"

int main() {
    svm_model *Model = svm_load_model("train.model"); // loaded from svm-train (step 4 above)
    svm_node x[feature_size + 1];
    // (some internal processing for obtaining the new feature vector for testing)
    x[feature_size].index = -1; // libsvm expects the feature array to end with index = -1
    double result = svm_predict(Model, x);
    std::cout << "result is " << result << std::endl;
    return 0;
}
but this does not give the same results as the Python/command-line workflow: there I get 98% precision on my test data, but here it's less than 20%, which is awful ...
I also used OpenCV (ml.h) for training my data and testing my samples,
but with OpenCV I got 70% accuracy, which is still more than a 20% drop from my real result!
I think the problem is the scaling, because in both svm.h and OpenCV I didn't find any function for scaling my training and test data.
Your usage of the command line tools looks ok. If you don't scale your test data the same way as your training data then predict will fail as you have discovered.
Just get the source for libsvm from http://www.csie.ntu.edu.tw/~cjlin/libsvm/ and incorporate the scaling restore logic in svm-scale.c into your code.
To see where the scaling parameters are read in, search for:
if(restore_filename)
The actual scaling is done in a function called output(). It would obviously be straightforward to return a value instead of printing the result.
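As a rough sketch of that logic (my own simplified version, not the exact svm-scale.c code, assuming you only scaled the x values and not y; load_range and scale_value are just illustrative names), restoring the "range" file written by svm-scale -s and applying the same linear mapping could look like this:

#include <cstdio>
#include <map>

struct FeatureRange { double fmin, fmax; };
std::map<int, FeatureRange> ranges; // per-feature min/max seen during training
double lower, upper;                // target range, e.g. 0 and 1 when you used -l 0

void load_range(const char *filename) {
    FILE *fp = fopen(filename, "r");
    char section;
    fscanf(fp, " %c", &section);            // first line of the feature section is 'x'
    fscanf(fp, "%lf %lf", &lower, &upper);
    int index; double fmin, fmax;
    while (fscanf(fp, "%d %lf %lf", &index, &fmin, &fmax) == 3) {
        ranges[index].fmin = fmin;
        ranges[index].fmax = fmax;
    }
    fclose(fp);
}

double scale_value(int index, double value) {
    const FeatureRange &r = ranges[index];
    if (r.fmax == r.fmin) return value;     // feature was constant in the training set
    if (value <= r.fmin) return lower;
    if (value >= r.fmax) return upper;
    return lower + (upper - lower) * (value - r.fmin) / (r.fmax - r.fmin);
}

Call load_range("range") once at startup, then apply scale_value(x[i].index, x[i].value) to every node before handing x to svm_predict, so the test features go through exactly the same transformation as the training data.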
BTW, the libsvm version in OpenCV is rather old (so I avoid it).
I'm new to TinyFPGA, so I need a little help!
I'm working on a TinyFPGA project for sensors and actuators where each TinyFPGA provides an 8-bit digital sensor input and four actuator outputs with different modes of operation (on/off, PWM, and pulses). They are serially interconnected in a ring using the WS2811 pixel "protocol", and the ring is intercepted by an ESP32.
I have successfully built a pretty decent testbench for system simulation which verifies 3 interconnected instances of the design at RTL level (it takes 4 hours to complete on my brand-new Ryzen 7 machine :-).
The next thing I want to do is post-route simulation to verify the timing - and here I get stuck. I'm using Lattice Diamond and the "built-in" ModelSim.
I want all the testbench logic to be simulated at RTL while the actual FPGA design instances are simulated post-route with timing annotation.
The .mdo script for ModelSim generated by Lattice Diamond looks like this:
if {![file exists "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing/timing.mpf"]} {
project new "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing" timing
project addfile "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.vo"
project addfile "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/genericIOSatelite_TB.v"
vlib work
vdel -lib work -all
vlib work
vlog +incdir+C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1 -work work "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.vo"
vlog +incdir+C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2 -work work "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/genericIOSatelite_TB.v"
} else {
project open "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing/timing"
project compileoutofdate
}
vsim -L work -L pmi_work -L ovi_machxo2 +transport_path_delays +transport_int_delays genericIOSatelite_TB -sdfmax /genericIOSatelite_TB/DUT0=C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf
view wave
add wave /*
run 1000ns
Here "genericIOSatelite_impl1_vo.vo" is my placed-and-routed FPGA design, "genericIOSatelite_TB.v" is my testbench, "genericIOSatelite_impl1_vo.sdf" is the timing database for my FPGA design, and "/genericIOSatelite_TB/DUT0" is one of the three testbench instantiations of the FPGA design (eventually I would want all three simulated with timing, but one problem at a time).
Now I get the following errors:
…
Loading instances from C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf
** Error (suppressible): (vsim-SDF-3250) C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf(7071): Failed to find INSTANCE ‘SLICE_303’.
** Error (suppressible): (vsim-SDF-3250) C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf(7082): Failed to find INSTANCE ‘SLICE_304’.
And hundreds more errors like this ...
But when I look at the first error, "Failed to find INSTANCE 'SLICE_303'", I don't understand the issue: I can clearly see the SLICE_303 instance both in "genericIOSatelite_impl1_vo.sdf" and in "genericIOSatelite_impl1_vo.vo":
"genericIOSatelite_impl1_vo.sdf":
.
.
.
(CELL
(CELLTYPE "SLICE_303")
(INSTANCE SLICE_303)
(DELAY
(ABSOLUTE
(IOPATH B0 F1 (635:710:786)(635:710:786))
(IOPATH A0 F1 (635:710:786)(635:710:786))
(IOPATH FCI F1 (459:514:569)(459:514:569))
)
)
)
.
.
.
"genericIOSatelite_impl1_vo.vo":
.
.
.
SLICE_303 SLICE_303( .B0(control_7_adj_1162), .A0(cnt_9_adj_1170),
.FCI(n4958), .F1(n312));
.
.
.
I would very much like to get advice on what I'm doing wrong here. I'm using the built-in OSCH at 133 MHz, and with a 7 ns cycle time I believe a reassuring post-place-and-route simulation at worst-case timing would be well worth having.
Best regards/Jonas
It appears to be a bug in ModelSim, as suggested in the following article: https://www.intel.com/content/www/us/en/support/programmable/articles/000084538.html
Adding -sdfnoerror -sdfnowarn seems to fix the problem - but it is not very reassuring to just squelch the issues :-(
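For reference, the two switches just go on the vsim line that Diamond generates (the same line shown above, with the long path abbreviated here):

vsim -sdfnoerror -sdfnowarn -L work -L pmi_work -L ovi_machxo2 +transport_path_delays +transport_int_delays genericIOSatelite_TB -sdfmax /genericIOSatelite_TB/DUT0=.../impl1/genericIOSatelite_impl1_vo.sdf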
I made a console app with Dart that adds two numbers and prints the result.
The size came out at about 5 MB!
Download link (Windows only) https://drive.google.com/open?id=1sxlvlSZUdxewzFNiAv_bXwaUui2gs2yA
Here is the code
import 'dart:io';
int inputf1;
int inputf2;
int inputf3;
void main() {
stdout.writeln('Type The First Number');
var input1 = stdin.readLineSync();
stdout.writeln('You typed: $input1 as the first number');
sleep( Duration(seconds: 1));
stdout.writeln('\nType The Second Number');
var input2 = stdin.readLineSync();
stdout.writeln('You typed: $input2 as the second number');
sleep( Duration(seconds: 1));
inputf1 = int.parse(input1);
inputf2 = int.parse(input2);
inputf3 = inputf1 + inputf2;
print('\nfinal answer is : $inputf3');
sleep( Duration(seconds: 10));
}
The reason for the big executable is that the dart2native compiler is not really designed to build an executable for your machine from scratch. Instead, it packages the dartaotruntime executable together with your AOT-compiled Dart program.
dartaotruntime contains all the Dart runtime libraries, and dart2native does not strip anything out of it (which would also be difficult since it is a binary), so you get the whole runtime even if all you do is add two numbers.
But it is not that bad, since it is a one-time cost per executable: if you make a very big program, dartaotruntime is still only included once.
However, if you are deploying many small programs in a single package, I recommend adding the -k aot parameter to dart2native so that, instead of an executable, it generates an .aot file which you can then run with dartaotruntime <program.aot>.
This makes deployment a bit more complicated, but you only need to ship the dartaotruntime binary together with your multiple .aot files.
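Roughly, the two-step approach looks like this (the file name here is just a placeholder for your own program):

dart2native add_numbers.dart -k aot -o add_numbers.aot
dartaotruntime add_numbers.aot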
I have compiled your program to both .exe and .aot with Dart for Windows 64-bit, version 2.8.2, so you can see the size difference.
Again, -k aot will not save you any disk space if you are only going to deploy a single executable. But it can save a lot if your project contains many programs.
It should also be noted that the .aot file is platform-dependent, just like the .exe file would be, and you should run it with the dartaotruntime from the same SDK version that was used to compile the file.
It is because native Dart apps are made by incorporating the rendering engine... I know that is true for all Flutter apps, which are based on Dart.
Look at this too https://medium.com/#rajesh.muthyala/app-size-in-flutter-5e56c464dea1
Recently I started learning MPI programming, and I have tried it on both Linux and Windows. I have no problem running the MPI application on Linux; however, I stumbled upon an "expression must have a constant value" error in Visual Studio.
For example, I'm trying to get the world_size via MPI_Comm_size(MPI_COMM_WORLD, &world_size); and then create an array sized by world_size:
Code Sample :
#include <mpi.h>
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int database[world_size]; // error occurred here
However, when I run it on Linux it works perfectly fine: I can execute the code while specifying the number of processes I want. Am I missing anything? I followed a particular YouTube tutorial that showed how to install MS-MPI for Visual Studio 2015.
Any help would be greatly appreciated.
Automatic array sizing with non-constant values (variable-length arrays) actually works with gcc (https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html). However, it's considered a bad idea because (as you've just experienced) your code won't be portable anymore. You just need to change your code to create the array with new. You might also want gcc to generate an error for this, to make sure your code stays portable: Disable variable-length automatic arrays in gcc
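A minimal sketch of what that change could look like (the same snippet as in the question, with the VLA replaced by a heap allocation; a std::vector<int> would work just as well):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int *database = new int[world_size]; // size is only known at run time, so allocate dynamically
    // ... use database ...
    delete[] database;
    MPI_Finalize();
    return 0;
}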
Windows 7, 64-bit
GNU Fortran (GCC) 4.7.0 20111220 (experimental) --> the MinGW version installed with Anaconda3/Miniconda3 64-bit.
Hi all,
I'm trying to compile some Fortran code to be used from Python via F2Py. The full project is Solcore, in case anyone is interested. On Linux and macOS everything works fine; the problem comes with Windows. After some effort I've narrowed the problem down to the quadruple-precision variables in my Fortran code, which are not being handled properly.
A minimal example that works perfectly well on Linux/macOS but not on Windows is:
program foo
real*16 q, q2
q = 20
q2 = q+q
print*, q, q2
end program foo
In Linux/MacOS this prints, as expected:
20.0000000000000000000000000000000000 40.0000000000000000000000000000000000
However, in Windows I get:
2.00000000000000000000000000000000000E+0001 1.68105157155604675313133890866087630E-4932
Setting aside the scientific notation, this is clearly not what I expected. The same result appears any time I do an operation with quadruple-precision variables, and I cannot figure out why.
This is not the same error as the one already pointed out for quadruple-precision variables in Fortran with the MinGW version included in Anaconda.
Any suggestion will be more than welcome. Please keep in mind that ultimately I need to make this work with F2Py, and the MinGW included in Anaconda is the only setup I have found that works, after reading many instructions and tutorials, so I would prefer to stick to it if possible.
Many thanks,
Diego
My code doesn't depend on the SM level; I can build it with sm10 if I want. But when I built it for 1.3 instead of 2.0, as I had done before, I got 1.25x the performance with no code changes!
sm20 -> 35ms
sm13 -> 25ms
After those gorgeous results, I tried checking/unchecking every option in Project Settings -> CUDA Settings :) I think I found the setting that produces that awesome speed:
If I use sm13 with "no fast math generation" (hereafter fm = fast math), I have 25ms
If I use sm13 with fm, I have 25ms
sm20 without fm = 35ms
sm20 with fm = 25ms (that is the same result)
Why is this so? Maybe sm13 forces the use of hardware math, but sm20 doesn't? Or is it just a coincidence, and the later SM level simply gives lower performance for programs written against a lower SM level?
In addition to compiling in release mode, as pointed out by @Robert Crovella, you should also consider that when you target sm_13 the compiler is able to simplify some of the floating-point math. sm_20 and later support precise division, precise square root, and denormals by default.
You can try disabling these features with the command-line options -ftz=true -prec-div=false -prec-sqrt=false. See the best practices guide for more information.
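On the nvcc command line that would look something like this (the file name is just a placeholder):

nvcc -arch=sm_20 -ftz=true -prec-div=false -prec-sqrt=false -o mykernel mykernel.cu

-use_fast_math implies all three of these options, which is presumably why the "fast math" setting brings your sm20 build back down to 25ms.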