C++ zbar library - unresolved external symbol - visual-studio

I ran the following code to detect and decode barcodes using the ZBar library, and I set the project properties accordingly. I am getting unresolved external symbol errors (LNK2019). How do I resolve this error? The code for my program and the errors are attached below.
#include <opencv2/opencv.hpp>
#include <C:/Program Files/ZBar/include/zbar.h>

using namespace cv;
using namespace std;
using namespace zbar;

typedef struct
{
    string type;
    string data;
    vector<Point> location;
} decodedObject;

// Find and decode barcodes and QR codes
void decode(Mat& im, vector<decodedObject>& decodedObjects)
{
    // Create zbar scanner
    ImageScanner scanner;
    // Configure scanner
    scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);
    // Convert image to grayscale
    Mat imGray;
    cvtColor(im, imGray, COLOR_BGR2GRAY);
    // Wrap image data in a zbar image
    Image image(im.cols, im.rows, "Y800", (uchar*)imGray.data, im.cols * im.rows);
    // Scan the image for barcodes and QRCodes
    int n = scanner.scan(image);
    // Print results
    for (Image::SymbolIterator symbol = image.symbol_begin(); symbol != image.symbol_end(); ++symbol)
    {
        decodedObject obj;
        obj.type = symbol->get_type_name();
        obj.data = symbol->get_data();
        // Print type and data
        cout << "Type : " << obj.type << endl;
        cout << "Data : " << obj.data << endl << endl;
        // Obtain location
        for (int i = 0; i < symbol->get_location_size(); i++)
        {
            obj.location.push_back(Point(symbol->get_location_x(i), symbol->get_location_y(i)));
        }
        decodedObjects.push_back(obj);
    }
}

// Display barcode and QR code location
void display(Mat& im, vector<decodedObject>& decodedObjects)
{
    // Loop over all decoded objects
    for (int i = 0; i < decodedObjects.size(); i++)
    {
        vector<Point> points = decodedObjects[i].location;
        vector<Point> hull;
        // If the points do not form a quad, find convex hull
        if (points.size() > 4)
            convexHull(points, hull);
        else
            hull = points;
        // Number of points in the convex hull
        int n = hull.size();
        for (int j = 0; j < n; j++)
        {
            line(im, hull[j], hull[(j + 1) % n], Scalar(255, 0, 0), 3);
        }
    }
    // Display results
    imshow("Results", im);
    waitKey(0);
}

int main(int argc, char* argv[])
{
    // Read image
    Mat im = imread("zbar-test.jpg");
    // Variable for decoded objects
    vector<decodedObject> decodedObjects;
    // Find and decode barcodes and QR codes
    decode(im, decodedObjects);
    // Display location
    display(im, decodedObjects);
    return EXIT_SUCCESS;
}
// Display barcode and QR code location
void display(Mat& im, vector<decodedObject>& decodedObjects)
{
// Loop over all decoded objects
for (int i = 0; i < decodedObjects.size(); i++)
{
vector<Point> points = decodedObjects[i].location;
vector<Point> hull;
// If the points do not form a quad, find convex hull
if (points.size() > 4)
convexHull(points, hull);
else
hull = points;
// Number of points in the convex hull
int n = hull.size();
for (int j = 0; j < n; j++)
{
line(im, hull[j], hull[(j + 1) % n], Scalar(255, 0, 0), 3);
}
}
// Display results
imshow("Results", im);
waitKey(0);
}
int main(int argc, char* argv[])
{
// Read image
Mat im = imread("zbar-test.jpg");
// Variable for decoded objects
vector<decodedObject> decodedObjects;
// Find and decode barcodes and QR codes
decode(im, decodedObjects);
// Display location
display(im, decodedObjects);
return EXIT_SUCCESS;
}
The errors are as follows,

Your code runs OK! You just need to reference the ZBar library under:
Project Properties -> Linker -> Input -> Additional Dependencies, e.g. C:\opencv\Zbar\lib\libzbar64-0.lib
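If you prefer to keep the dependency visible in the source, MSVC also accepts a linker pragma; a minimal sketch, assuming the same example library path as above:
// MSVC-only sketch: pull in the ZBar import library directly from source.
// The path is the example path from the answer above; adjust it to your install,
// and make sure the .lib matches your build platform (x64 vs x86).
#pragma comment(lib, "C:\\opencv\\Zbar\\lib\\libzbar64-0.lib")
Either way, a mismatch between the library's platform and the build configuration will still produce LNK2019 errors.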
With kind regards,
PFG

Related

printing a board with lines in a 2D char array

#include <iostream>
using namespace std;
//prints the board with labels for options.
void print(const char board[3][3]) // function prototype
{
    for (int Row = 0; Row < 3; ++Row)
    {
        for (int Col = 0; Col < 3; ++Col)
        {
            cout << board[Row][Col];
            cout << " | ";
        }
    }
}

int main()
{
    int MaxiBoardGrid = 9;
    char board[3][3];
    board[0][0] = 'A';
    board[0][1] = 'B';
    board[0][2] = 'C';
    board[1][0] = 'D';
    board[1][1] = 'E';
    board[1][2] = 'F';
    board[2][0] = 'G';
    board[2][1] = 'H';
    board[2][2] = 'I';
    print(board);
    return 0;
}
DESIRED OUTPUT:
A|B|C|
------
D|E|F
------
G|H|I
I am stuck on printing the separator lines and laying out the rows in the format above.
The language is C++.
I am new to 2D arrays, and I am trying to use the basic constructs I am familiar with to get the board printed; then I will assign players for my proposed game.
Thanks.
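One way to produce that layout, as a minimal sketch that assumes the same 3x3 char board as above (the function name printBoard and the explicit separators are just illustrative):
#include <iostream>
using namespace std;

// Print a "|" between cells inside a row, end each row with a newline,
// and print a dashed line between rows.
void printBoard(const char board[3][3])
{
    for (int row = 0; row < 3; ++row)
    {
        for (int col = 0; col < 3; ++col)
        {
            cout << board[row][col];
            if (col < 2)
                cout << "|";
        }
        cout << "\n";
        if (row < 2)
            cout << "------\n";
    }
}
Calling printBoard(board) from the main above prints A|B|C, a dashed line, D|E|F, another dashed line, and G|H|I, which matches the desired output apart from the trailing | on the first line.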

OpenCV perspectiveTransform broken function

I'm trying to use perspectiveTransform but I keep getting an error. I tried to follow the solution from this thread: http://answers.opencv.org/question/18252/opencv-assertion-failed-for-perspective-transform/
_players[i].get_Coordinates() is of type Point
_homography_matrix is a 3 x 3 Mat
Mat temp_Mat = Mat::zeros(2, 1, CV_32FC2);
for (int i = 0; i < _players.size(); i++)
{
    cout << Mat(_players[i].get_Coordinates()) << endl;
    perspectiveTransform(Mat(_players[i].get_Coordinates()), temp_Mat, _homography_matrix);
}
Also, how do I convert temp_Mat into type Point?
OpenCV Error: Assertion failed (scn + 1 == m.cols) in cv::perspectiveTransform
Basically you just need to correct from
Mat(_players[i].get_Coordinates()) ...
to
Mat2f(_players[i].get_Coordinates()) ...
In the first case you are creating a 2x1, 1-channel float matrix; in the second (correct) case you create a 1x1, 2-channel float matrix.
You also don't need to initialize temp_Mat.
You can also use the Mat_ template to better control the types of your Mats. E.g. creating a Mat of type CV_32FC2 is equivalent to creating a Mat2f.
This sample code will show you also how to convert back and forth between Mat and Point:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    // Some random points
    vector<Point2f> pts = { Point2f(1, 2), Point2f(5, 10) };
    // Some random transform matrix
    Mat1f m(3, 3, float(0.1));
    for (int i = 0; i < pts.size(); ++i)
    {
        cout << "Point: " << pts[i] << endl;
        Mat2f dst;
        perspectiveTransform(Mat2f(pts[i]), dst, m);
        cout << "Dst mat: " << dst << endl;
        Point2f p(dst(0));
        cout << "Dst point: " << p << endl;
    }
    return 0;
}

How to avoid detecting image frame when using findContours

How can I avoid detecting the frame of an image when using findContours (OpenCV)? Until I found "OpenCV findContours always finds two contours for every object" and implemented that answer, I was not detecting the internal object consistently (the object outline was broken into several pieces), but now I detect the image frame every time.
The image is of a quad-rotor UAV seen from the bottom; I am using a series of pictures for 'training' object detection. For that, I need to be sure that I can consistently get the UAV object. I guess I could invert the colors, but that seems like a dirty hack.
The images are, first, the input image just before findContours, and then the resulting contours. I have seven test images, and all seven have a frame and the UAV. The Hu moments are very similar (as expected).
The code (C++11, and quite messy) for finding the contours/objects and calculating the Hu moments:
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <fstream>
#include <string>
using namespace cv;
using namespace std;
#define EROSION_SIZE 1
#define ERODE_CANNY_PREP_ITERATIONS 5
int main() {
    Mat image, canny_output, element, padded;
    RNG rng(12345);
    int numbers[] = {195, 223, 260, 295, 331, 368, 396};
    string pre = "/home/alrekr/Pictures/UAS/hu-images/frame_";
    string middle = "_threshold";
    string post = ".png";
    string filename = "";
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    ofstream fout("/home/alrekr/Pictures/UAS/hu-data/hu.dat");
    element = getStructuringElement(MORPH_RECT,
        Size(2*EROSION_SIZE + 1, 2*EROSION_SIZE + 1),
        Point(EROSION_SIZE, EROSION_SIZE));
    namedWindow("Window", CV_WINDOW_AUTOSIZE);
    for (int i : numbers) {
        filename = pre + to_string(i) + middle + post;
        image = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
        erode(image, image, element, Point(-1,-1), ERODE_CANNY_PREP_ITERATIONS);
        imwrite("/home/alrekr/Pictures/UAS/hu-data/prep_for_canny_" + to_string(i) + ".png", image);
        findContours(image, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
        vector<Moments> mu(contours.size());
        if(contours.size() < 1) {
            cout << "No contours found" << endl;
        } else {
            cout << "Contours found: " << contours.size() << endl;
        }
        vector<Point2f> mc(contours.size());
        for(int j = 0; j < (int)contours.size(); j++) {
            // Compute the moments before using them; otherwise mu[j] is still empty here.
            mu[j] = moments(contours[j]);
            mc[j] = Point2f(mu[j].m10/mu[j].m00, mu[j].m01/mu[j].m00);
        }
        Mat drawing = Mat::zeros(image.size(), CV_8UC3);
        for(int j = 0; j < (int)contours.size(); j++) {
            Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
            drawContours(drawing, contours, j, color, 2, 8, hierarchy, 0, Point());
            imshow("Window", drawing);
            waitKey(0);
        }
        imwrite("/home/alrekr/Pictures/UAS/hu-data/cannied_" + to_string(i) + ".png", drawing);
        fout << "Frame " << i << "\n";
        for(int j = 0; j < (int)contours.size(); j++) {
            double hu[7];
            HuMoments(mu[j], hu);
            fout << "Object " << to_string(j) << "\n";
            fout << hu[0] << "\n";
            fout << hu[1] << "\n";
            fout << hu[2] << "\n";
            fout << hu[3] << "\n";
            fout << hu[4] << "\n";
            fout << hu[5] << "\n";
            fout << hu[6] << "\n";
        }
    }
    fout.close();
    return 0;
}
The function cv::findContours describes the contour of areas consisting of ones. The areas in which you are interested are black, though.
So the solution is simple. Invert the input image before detecting contours:
image = 255 - image;
Below is a code example which I derived from your example above:
#include <opencv2/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>

#define EROSION_SIZE 1
#define ERODE_CANNY_PREP_ITERATIONS 5

int main( int argc, char ** argv )
{
    // Display the version of the linked OpenCV library.
    std::cout << "Using OpenCV " << CV_VERSION_MAJOR << "." << CV_VERSION_MINOR << ".";
    std::cout << CV_VERSION_REVISION << CV_VERSION_STATUS << std::endl;
    // Load the input file.
    std::string filename = std::string( argv[ 1 ] );
    cv::Mat image = imread( filename, cv::IMREAD_GRAYSCALE );
    // Invert the image so the area of the UAV is filled with 1's. This is necessary since
    // cv::findContours describes the boundary of areas consisting of 1's.
    image = 255 - image;
    // Detect contours.
    std::vector< std::vector< cv::Point > > contours;
    std::vector< cv::Vec4i > hierarchy;
    cv::findContours( image, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE );
    std::cout << "Contours found: " << contours.size() << std::endl;
    // Display and save the results.
    cv::RNG rng( 12345 );
    cv::Mat contourImage = cv::Mat::zeros( image.size(), CV_8UC3 );
    for( size_t j = 0; j < contours.size(); j++ )
    {
        cv::Scalar color( rng.uniform( 0, 255 ), rng.uniform( 0, 255 ), rng.uniform( 0, 255 ) );
        cv::drawContours( contourImage, contours, j, color, 2, 8, hierarchy, 0, cv::Point() );
    }
    // cv::imwrite( "contours.png", contourImage );
    cv::imshow( "contours", contourImage );
    cv::waitKey( 0 );
    return 0;
}
The console output is as follows:
$ ./a.out gvlGK.png
Using OpenCV 3.0.0-beta
Contours found: 1
and the resulting contour image is this:
Another solution would be:
find the bounding rectangle of the contour
x, y, w, h = cv2.boundingRect(c)
then compare the size of the image with the size of the bounding rectangle, for example
cnt_size = w * h
if abs(cnt_size - img_size) <= ERROR_THRESHOLD:
    # discard this contour
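Since the rest of this thread is C++, here is a minimal C++ sketch of the same bounding-rectangle idea; the helper name dropFrameContours and the 0.95 area ratio are illustrative, not from the original answer:
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Drop any contour whose bounding box covers (almost) the whole image,
// since such a contour is the image frame rather than the object of interest.
std::vector<std::vector<cv::Point>> dropFrameContours(
    const std::vector<std::vector<cv::Point>>& contours, const cv::Size& imageSize)
{
    const double imageArea = static_cast<double>(imageSize.width) * imageSize.height;
    std::vector<std::vector<cv::Point>> kept;
    for (const auto& contour : contours)
    {
        cv::Rect box = cv::boundingRect(contour);
        if (box.area() < 0.95 * imageArea)
            kept.push_back(contour); // not the frame, keep it
    }
    return kept;
}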
If you have a white background, first invert it using the THRESH_BINARY_INV threshold type and then find the contours.
image = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
threshold(image,image,100,255,THRESH_BINARY_INV);
findContours( image, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE );
This will only return the contour that you need.

C++11: How to Get A Multidimensional Array Through vector and to Assign it to auto?

I am a lazy programmer. I want to use C++ vector to create a multidimensional array. For example, this code creates a 3x2 2D array:
int nR = 3;
int nC = 2;
vector<vector<double> > array2D(nR);
for(int r = 0; r < nR; r++)
    array2D[r].resize(nC, 0);
However, I am too lazy to
declare array2D's data type: vector<vector<double> >
C++ auto could solve this problem.
However, I am too lazy to
write loop(s) to allocate the space(s) for each object like array2D.
Writing a function could solve this problem.
However, I am too lazy to
write each function for each N-dimensional array.
write nested N-1 loops for allocating spaces.
write each function for each data type.
The C++11 variadic template with function recursion could solve this problem.
Is it possible ...?
This is what you want. (Tested on Microsoft Visual C++ 2013 Update 1)
#include <iostream>
#include <vector>
using namespace std;

template<class elemType> inline vector<elemType> getArrayND(int dim) {
    // Allocate space and initialize all elements to 0s.
    return vector<elemType>(dim, 0);
}

template<class elemType, class... Dims> inline auto getArrayND(
    int dim, Dims... resDims
) -> vector<decltype(getArrayND<elemType>(resDims...))> {
    // Allocate space for this dimension.
    auto parent = vector<decltype(getArrayND<elemType>(resDims...))>(dim);
    // Recurse into the next dimension.
    for (int i = 0; i < dim; i++) {
        parent[i] = getArrayND<elemType>(resDims...);
    }
    return parent;
}

int main() {
    auto test3D = getArrayND<double>(2, 3, 4);
    auto test4D = getArrayND<double>(2, 3, 4, 2);
    test3D[0][0][1] = 3;
    test4D[1][2][3][1] = 5;
    cout << test3D[0][0][1] << endl;
    cout << test4D[1][2][3][1] << endl;
    return 0;
}

Get RGB pixels from input image and reconstruct an output image in opencv

I want to load an image in OpenCV, split it into channels (RGB), increase any one of the colors, and get the corresponding output image. Is there an easy way to do this?
Well to add any scalar to an RGB image you can use cvAddS(srcImage, scalarToAdd, dstImage).
Here is an example:
int main(int argc, char** argv)
{
    // Create a named window with the name of the file.
    cvNamedWindow( argv[1], 1 );
    // Load the image from the given file name.
    IplImage* img = cvLoadImage( argv[1] );
    // Make a scalar to add 30 to Blue Color and 20 to Red (BGR format)
    CvScalar colorAdd = cvScalar(30.0, 0, 20.0);
    cvAddS(img, colorAdd, img);
    // Show the image in the named window
    cvShowImage( argv[1], img );
    // Idle until the user hits the "Esc" key.
    while( 1 ) {
        if( cvWaitKey( 100 ) == 27 ) break;
    }
    cvDestroyWindow( argv[1] );
    cvReleaseImage( &img );
    exit(0);
}
Haven't tested the code, hope it helps.
#karlphillip: Generally a better solution for RGB images - it handles any padding at row ends and also parallelizes nicely with OpenMP!
for (int i = 0; i < height; i++)
{
    // Assumes pRGBImg points to a cv::Mat here (IplImage has no ptr() member),
    // so ptr(i) returns a pointer to the first byte of row i.
    unsigned char *pRow = pRGBImg->ptr(i);
    for (int j = 0; j < width * bpp; j += bpp)
        // For educational purposes, here is how to print each R G B channel:
        std::cout << std::dec << "R:" << (int) pRow[j] <<
            " G:" << (int) pRow[j+1] <<
            " B:" << (int) pRow[j+2] << " ";
}
With the OpenCV C++ interface you can simply add a Scalar to an image with the overloaded arithmetic operators.
int main(int argc, const char * argv[]) {
    cv::Mat image;
    // read an image
    if (argc < 2)
        return 2;
    image = cv::imread(argv[1]);
    if (!image.data) {
        std::cout << "Image file not found\n";
        return 1;
    }
    cv::Mat image2 = image.clone(); // Make a deep copy of the image
    image2 += cv::Scalar(30, 0, 20); // Add 30 to blue, 20 to red
    cv::namedWindow("original");
    cv::imshow("original", image);
    cv::namedWindow("addcolors");
    cv::imshow("addcolors", image2);
    cv::waitKey(0);
    return 0;
}
Another option is to manually iterate on the pixels of the image and work on the channel that interests you. This will give you the flexibility to manipulate each channel individually or as a group.
The following code uses the C interface of OpenCV:
IplImage* pRGBImg = cvLoadImage("test.png", CV_LOAD_IMAGE_UNCHANGED);
int width = pRGBImg->width;
int height = pRGBImg->height;
int bpp = pRGBImg->nChannels;
for (int i = 0; i < width*height*bpp; i += bpp)
{
    // For educational purposes, here is how to print each R G B channel:
    std::cout << std::dec << "R:" << (int) pRGBImg->imageData[i] <<
        " G:" << (int) pRGBImg->imageData[i+1] <<
        " B:" << (int) pRGBImg->imageData[i+2] << " ";
}
However, if you want to add a fixed value to a certain channel you might want to check #Popovici's answer.
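The question also asks about splitting the image into channels and increasing just one of them; a minimal sketch of that approach with the C++ interface, assuming a 3-channel BGR input (the file name "input.png" and the +40 offset are just illustrative):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Load a 3-channel BGR image (the file name is just illustrative).
    cv::Mat image = cv::imread("input.png", cv::IMREAD_COLOR);
    if (image.empty())
        return 1;

    // Split into separate B, G and R planes.
    std::vector<cv::Mat> channels;
    cv::split(image, channels);

    // Increase the red channel by an illustrative offset; values saturate at 255.
    channels[2] += 40;

    // Recombine the planes into the output image and show it.
    cv::Mat output;
    cv::merge(channels, output);
    cv::imshow("boosted red", output);
    cv::waitKey(0);
    return 0;
}
cv::split gives one single-channel Mat per plane, so any per-channel manipulation can be done before cv::merge recombines them; the arithmetic saturates at 255 just like the operator+= answer above.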
