boost::geometry svg_mapper, the polygon is not fully saved? - boost

I have a simple example where a polygon is created and saved to an SVG file. However, only part of the polygon is saved. How can I resize the graph?
#include <string>
#include <fstream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/adapted/c_array.hpp>
// Using the boost namespace
using namespace boost::geometry;
// Register the C array points for boost
BOOST_GEOMETRY_REGISTER_C_ARRAY_CS(cs::cartesian)
template <typename Geometry>
void create_svg(std::string const& file_path, Geometry const& geometry,
                std::string const& style)
{
    using PointType = typename point_type<Geometry>::type;
    std::ofstream svg_file(file_path.c_str());

    svg_mapper<PointType> mapper(svg_file, 2000, 2000);
    mapper.add(geometry);
    mapper.map(geometry, style);
}

int main() {
    model::polygon<model::d2::point_xy<double>, false> polygon;
    double points[][2] = {{0., 0.}, {2., 1.}, {3., 3.}, {-0.5, 0.5}, {0., 0.}};
    append(polygon, points);

    std::string style{"fill-rule:nonzero;fill-opacity:0.5;fill:yellow;stroke:black;stroke-width:2;"};
    create_svg("image.svg", polygon, style);
    return 0;
}
The output is like:

It seems that somehow the scale factor/bounding box gets miscalculated.
Using
bg::svg_mapper<PointType> mapper(svg_file, 400, 400);
works fine. If all coordinates end up on a (larger-than-)integral grid, all is fine.
I tried to isolate any contributing factor (integral coordinate type, clockwise-oriented polygons, the SameScale = true template argument of svg_mapper), but it all seems unrelated. UBSan did not flag anything. I would report this to the library maintainers. Meanwhile, try smaller SVG dimensions to "hide" the issue:
Live On Coliru
#include <fstream>
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

// Namespace alias for boost::geometry
namespace bg = boost::geometry;

template <typename Geometry>
void create_svg(std::string const& file_path, Geometry const& geometry,
                std::string const& style)
{
    using PointType = typename bg::point_type<Geometry>::type;
    std::ofstream svg_file(file_path);

    bg::svg_mapper<PointType/*, true*/> mapper(svg_file, 400, 400);
    mapper.add(geometry);
    mapper.map(geometry, style);
}

template <typename T> void check(T& geo) {
    for (std::string reason; !bg::is_valid(geo, reason); bg::correct(geo)) {
        std::cout << "Correcting: " << reason << "\n";
    }
}

int main()
{
    bg::model::polygon<bg::model::d2::point_xy<double>> polygon{
        {{0, 0}, {-.5, .5}, {3, 3}, {2, 1}, {0, 0}}};
    check(polygon);

    std::string style{"fill-rule:nonzero;fill-opacity:0.5;fill:yellow;stroke:"
                      "black;stroke-width:2;"};
    create_svg("image.svg", polygon, style);
}
Rendering:
BREAKTHROUGH
I just noticed that for me Chrome renders the "weird" SVG as
I thought it was strange that it was rendered differently from your question.
I just thought to check with Inkscape and, lo and behold, it comes out "normal":
The x/y rulers seem to indicate correct scaling. So my best theory is that it's a browser-specific limitation that doesn't render "very large" SVG scales well?

Related

How to plot sine wave?

I am new to C++ programming and I would like to plot a sine/cosine/square wave but I cannot find any resources to help me with it.
My goal is to produce any wave, then perform a Fourier transform of that wave and produce the resultant wave.
This code should work for you. Just make sure you run it in an IDE that ships graphics.h. In many newer IDEs graphics.h doesn't come by default, and you have to add it first.
#include <iostream>
#include <conio.h>
#include <graphics.h>
#include <math.h>

using namespace std;

int main(){
    initwindow(800, 600);

    int x, y;
    line(0, 500, getmaxx(), 500); // to draw co-ordinate axes
    line(500, 0, 500, getmaxy());

    float pi = 3.14159f;
    for (int i = -360; i < 360; i++){
        x = (int)(500 + i);
        y = (int)(500 - sin(i * pi / 180) * 25); // i is in degrees; pi/180 converts to radians
        putpixel(x, y, WHITE); // to plot points on the graph
    }

    getch(); // to see the resultant graph
    closegraph();
    return 0;
}

Putting image into a Window in x11

I have a QR code in .JPG format. I load it using OpenCV 3.4.4. Now, I create a new X11 window using XCreateSimpleWindow(). Then, I will resize the QR image to that of this new window.
Next, I want to put this resized QR code into the window. I tried using XPutImage(), but without any success, probably because I don't know the usage.
To use XPutImage(), I first took an image of the X11 window using XGetImage(), then obtained the pixel values of the QR image and assigned them to the pixels of the image obtained through XGetImage().
Once I had this XImage, I tried putting it to the window using XPutImage. But, it is still showing a black window.
There is no error in the terminal, but result is not as desired.
Any solution to this problem? Like, how to change the background of the window (X11) w.r.t a sample image, and using XPutImage()?
The code goes like this...
// Written by Ch. Tronche (http://tronche.lri.fr:8000/)
// Copyright by the author. This is unmaintained, no-warranty free software.
// Please use freely. It is appreciated (but by no means mandatory) to
// acknowledge the author's contribution. Thank you.
// Started on Thu Jun 26 23:29:03 1997
//
// Xlib tutorial: 2nd program
// Make a window appear on the screen and draw a line inside.
// If you don't understand this program, go to
// http://tronche.lri.fr:8000/gui/x/xlib-tutorial/2nd-program-anatomy.html
//
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>  // for OpenCV
#include <opencv2/core.hpp>    // basic OpenCV structures (cv::Mat)
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h>          // every Xlib program must include this
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#include <assert.h>            // to test return values the lazy way
#include <unistd.h>
#define NIL (0) // A name for the void pointer

using namespace cv;
using namespace std;

int main()
{
    XGCValues gr_values;
    //GC gc;
    XColor color, dummy;
    Display *dpy = XOpenDisplay(NIL);
    //assert(dpy);
    //int screen = DefaultScreen(dpy);

    // Get some colors
    int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
    int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));

    // Create the window
    Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                   200, 100, 0, whiteColor, blackColor);

    // We want to get MapNotify events
    XSelectInput(dpy, w, StructureNotifyMask);
    XMapWindow(dpy, w);

    // Wait for the MapNotify event
    for (;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        if (e.type == MapNotify)
            break;
    }

    Window focal = w;
    XWindowAttributes gwa;
    XGetWindowAttributes(dpy, w, &gwa);
    int wd1 = gwa.width;
    int ht1 = gwa.height;

    XImage *image = XGetImage(dpy, w, 0, 0, wd1, ht1, AllPlanes, ZPixmap);
    unsigned long rm = image->red_mask;
    unsigned long gm = image->green_mask;
    unsigned long bm = image->blue_mask;

    Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
    Mat scrap = imread("qr.jpg");
    resize(scrap, img, img.size(), 0, 0, CV_INTER_AREA);

    for (int x = 0; x < wd1; x++)
        for (int y = 0; y < ht1; y++)
        {
            unsigned long pixel = XGetPixel(image, x, y);
            unsigned char blue  = pixel & bm;         // apply the channel masks
            unsigned char green = (pixel & gm) >> 8;  // to obtain the individual
            unsigned char red   = (pixel & rm) >> 16; // channel values
            Vec3b color = img.at<Vec3b>(Point(x, y)); // read RGB values from the OpenCV image
            //color[0] = blue;
            //color[1] = green;
            //color[2] = red;
            //img.at<Vec3b>(Point(x,y)) = color;
            pixel = color[0];//&color[1]&color[2];
        }

    namedWindow("QR", CV_WINDOW_NORMAL);
    imshow("QR", img);
    cout << "herererere\n";

    GC gc = XCreateGC(dpy, w, 0, NIL);
    XPutImage(dpy, w, gc, image, 0, 0, wd1, ht1, wd1, ht1);
    waitKey(0);
    //sleep(3);
    return 0;
}
Alright, solved it on my own. There was a silly mistake: I changed the pixel value but never wrote it back into the XImage before putting it to the background of the window.
First use XPutPixel(), then use XPutImage().
Here is the final and correct method:
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>  // for OpenCV
#include <opencv2/core.hpp>    // basic OpenCV structures (cv::Mat)
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h>          // every Xlib program must include this
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#include <assert.h>            // to test return values the lazy way
#include <unistd.h>
#define NIL (0) // A name for the void pointer

using namespace cv;
using namespace std;

int main()
{
    XGCValues gr_values;
    //GC gc;
    XColor color, dummy;
    Display *dpy = XOpenDisplay(NIL);
    //assert(dpy);
    //int screen = DefaultScreen(dpy);

    // Get some colors
    int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
    int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));

    // Create the window
    Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                   200, 100, 0, whiteColor, blackColor);

    // We want to get MapNotify events
    XSelectInput(dpy, w, StructureNotifyMask);
    XMapWindow(dpy, w);

    // Wait for the MapNotify event
    for (;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        if (e.type == MapNotify)
            break;
    }

    Window focal = w;
    XWindowAttributes gwa;
    XGetWindowAttributes(dpy, w, &gwa);
    int wd1 = gwa.width;
    int ht1 = gwa.height;

    XImage *image = XGetImage(dpy, w, 0, 0, wd1, ht1, AllPlanes, ZPixmap);
    unsigned long rm = image->red_mask;
    unsigned long gm = image->green_mask;
    unsigned long bm = image->blue_mask;

    Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
    Mat scrap = imread("qr.jpg");
    resize(scrap, img, img.size(), 0, 0, CV_INTER_AREA);

    for (int x = 0; x < wd1; x++)
        for (int y = 0; y < ht1; y++)
        {
            unsigned long pixel = XGetPixel(image, x, y);
            Vec3b color = img.at<Vec3b>(Point(x, y));
            pixel = 65536 * color[2] + 256 * color[1] + color[0];
            XPutPixel(image, x, y, pixel); // write the new value back into the XImage
        }

    namedWindow("QR", CV_WINDOW_NORMAL);
    imshow("QR", img);

    GC gc = XCreateGC(dpy, w, 0, NIL);
    XPutImage(dpy, w, gc, image, 0, 0, 0, 0, wd1, ht1);
    waitKey(0);
    return 0;
}
Simplicity is key, and improves performance (in this case):
//..
// Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
cv::Mat img(ht1, wd1, CV_8UC4, image->data); // initialize with the existing XImage memory

Mat scrap = imread("qr.jpg");
cv::cvtColor(scrap, scrap, cv::COLOR_BGR2BGRA);
resize(scrap, img, img.size(), 0, 0, cv::INTER_AREA);

// .. and we can skip the for loops
namedWindow("QR", CV_WINDOW_NORMAL);
imshow("QR", img);
// .. etc

Boost Geometry Matrix Transformations on Polygons

Are there any examples of matrix transformations on (Cartesian) polygons using Boost Geometry? I am defining the matrix with simple std::vectors.
Also, I could only find one example of matrix_transformer using uBLAS, but it's way too convoluted for a simple matrix transformation. If this is the only way, I'll stick with it, but it would be great to have other options, and to do this with std::vector instead of ublas::matrix.
Here's my solution for anyone who might be interested. Boost Geometry actually added a strategy called matrix_transformer that relies on Boost's qvm::mat for matrix transformations. There aren't many examples out there, so here's my code:
#include <cmath>
#include <fstream>
#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

using namespace boost::geometry::strategy::transform;
using std::vector;

typedef boost::geometry::model::d2::point_xy<double> point_2f;
typedef boost::geometry::model::polygon<point_2f> polygon_2f;

int main() {
    polygon_2f pol;
    boost::geometry::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);
    polygon_2f polTrans;

    // Set the rotation angle (in radians)
    double angleDeg = 45;
    double angleRad = angleDeg * 3.14159 / 180.0;
    vector<vector<double>> mat = {{std::cos(angleRad), std::sin(angleRad), 0},
                                  {-std::sin(angleRad), std::cos(angleRad), 0},
                                  {0, 0, 1}};

    // Create the matrix_transformer for a simple rotation matrix
    matrix_transformer<double, 2, 2> rotation(
        mat[0][0], mat[0][1], mat[0][2],
        mat[1][0], mat[1][1], mat[1][2],
        mat[2][0], mat[2][1], mat[2][2]);

    // Apply the matrix_transformer
    boost::geometry::transform(pol, polTrans, rotation);

    // Create an SVG file to show the results
    std::ofstream svg("transformationExample.svg");
    boost::geometry::svg_mapper<point_2f> mapper(svg, 400, 400);
    mapper.add(pol);
    mapper.map(pol, "fill-opacity:0.5;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
    mapper.add(polTrans);
    mapper.map(polTrans, "fill-opacity:0.5;fill:rgb(153,204,255);stroke:rgb(153,204,255);stroke-width:2");
    return 0;
}
And here's my result, where the green polygon is the original and the blue polygon is transformed (remember that the rotation was about the origin):

how to create a masked image in opencv

Here is my code (working after taking inputs from zindarod)
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace cv;

static void help()
{
    printf("\nThis program demonstrates using the features2d detector, descriptor extractor and simple matcher\n"
           "Using the SIFT descriptor:\n"
           "\n"
           "Usage:\n matcher_simple <image1> <image2>\n");
}

int main(int argc, char** argv)
{
    if (argc != 3)
    {
        help();
        return -1;
    }

    Mat img1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    Mat img2 = imread(argv[2], CV_LOAD_IMAGE_GRAYSCALE);
    if (img1.empty() || img2.empty()) // check before using img1.size() below
    {
        printf("Can't read one of the images\n");
        return -1;
    }

    Rect regionone(151, 115, 42, 27);
    Rect regiontwo(141, 105, 52, 37);

    Mat dst, mask;
    Rect rect(151, 115, 42, 27);
    mask = Mat::zeros(img1.size(), CV_8UC1);
    mask(rect) = 255; // any non-zero value marks the region
    img1.copyTo(dst, mask);

    // detecting keypoints
    SiftFeatureDetector detector(400);
    vector<KeyPoint> keypoints1, keypoints2;
    detector.detect(dst, keypoints1);
    detector.detect(img2, keypoints2);

    // computing descriptors
    SiftDescriptorExtractor extractor;
    Mat descriptors1, descriptors2;
    extractor.compute(dst, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);

    // matching descriptors
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    // drawing the results
    namedWindow("matches", 1);
    Mat img_matches;
    drawMatches(dst, keypoints1, img2, keypoints2, matches, img_matches);
    imshow("masked image", dst);
    //imshow("matches", img_matches);
    waitKey(0);
    return 0;
}
My aim is to compare two different parts of two different images.
You can run the above code with:
g++ above_code.cpp -o bincode -I /usr/include/ `pkg-config --libs --cflags opencv`
./bincode image1.png image2.png
It seems that I was passing a rectangular region to the keypoint detector; as a result, keypoints1 were saved with coordinates relative to (151, 115).
So I should pass a masked image to the keypoint detector instead.
How can I create a matrix filled with zeros (or 255), but with the rectangular region at (151, 115) copied from img1?
Thanks.
The following copies source image to destination image based on mask.
Mat src = imread("source.jpg", -1), dst, mask;
Rect rect(151, 115, 42, 27);

mask = Mat::zeros(src.size(), CV_8UC1); // note: size(), not Size()
rectangle(mask, Point(rect.x, rect.y), Point(rect.x + rect.width, rect.y + rect.height), Scalar(255), -1);
src.copyTo(dst, mask);
Although there's a better way for your problem, you can also translate your keypoints back to the coordinate frame of the original image.

Intersection of plane and polygon mesh - coplanar triangles treatment in CGAL

I want to use CGAL as a substitute for vtkCutter when rendering 2D slices of 3D polygonal meshes. To obtain high performance, I use an AABB_tree, and indeed the intersections are produced much faster. However, when triangles (or their edges) are coplanar with the query plane, the results are rather arbitrary, likely due to numerical round-off issues. As I need high performance, I use the Simple_cartesian<double> kernel.
Is there a way to control this behaviour in CGAL? For example, can I specify some kind of tolerance so that if two points of a triangle are within this tolerance from a plane - the edge is considered to lie in the plane?
Cheers,
Rostislav.
The code:
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <list>
#include <vector>
#include <CGAL/Simple_cartesian.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/IO/Polyhedron_iostream.h>

typedef CGAL::Simple_cartesian<double> K;
typedef K::FT FT;
typedef K::Point_3 Point;
typedef K::Vector_3 Vector;
typedef K::Plane_3 Plane;
typedef K::Segment_3 Segment;
typedef K::Triangle_3 Triangle;
typedef CGAL::Polyhedron_3<K> Polyhedron;
typedef std::list<Segment>::iterator Iterator;
typedef CGAL::AABB_face_graph_triangle_primitive<Polyhedron, CGAL::Default, CGAL::Tag_false> Primitive;
typedef CGAL::AABB_traits<K, Primitive> Traits;
typedef CGAL::AABB_tree<Traits> Tree;

int main()
{
    Polyhedron polyhedron;
    std::ifstream inFile("mesh.off");
    inFile >> polyhedron;

    std::ifstream planeIn("plane.txt");
    double a[9];
    for (int i = 0; i < 9; ++i) {
        planeIn >> a[i];
    }

    Tree tree(polyhedron.facets_begin(), polyhedron.facets_end(), polyhedron);
    tree.accelerate_distance_queries();

    Point points[] = { { a[0], a[1], a[2] }, { a[3], a[4], a[5] }, { a[6], a[7], a[8] } };
    Plane plane_query(points[0], points[1], points[2]);

    std::vector<Tree::Intersection_and_primitive_id<Plane>::Type> segments;
    tree.all_intersections(plane_query, std::back_inserter(segments));

    return EXIT_SUCCESS;
}
