Pretty much sums it up. The only thing that isn't flipped is bevy_inspector_egui. Positive x coordinates are on the left and negative ones are on the right, and images (they're called Textures by the thing I loaded them with) are flipped.
Has anyone encountered the same problem, or does anyone know how to solve it?
I'm using Bevy 0.6.0 because that's the version the tutorial I followed uses.
If upgrading to Bevy 0.7.0 would solve the problem, could you please send a link to a tutorial / docs explaining how to refactor the code? (I couldn't get 0.7 to work because of errors along the lines of Bevy::prelude::audio::Audio isn't Bevy_audio::Audio, and similar for a lot of things.)
I'm new to Threejs and I have been using the EdgesHelper which achieves the look I want for now. But I have two questions...
What is the default edge width/how is it calculated?
Can the edge width be changed...
I have searched around and I'm pretty sure that, due to some limitation (not of threejs but of Windows, I believe), there is no simple method to change the thickness of the edges (?). A lot of the examples I found that have thicker edges only work on a particular geometry (i.e. they don't seem universal).
Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to get what I want myself (which I'm not even sure I would be able to do), does anyone know of a way to control the edge thickness, an existing example, or a library someone has already written that works on any shape (any imported obj, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically you cannot change the thickness due to a limitation in ANGLE. There are some workarounds like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had some limitations for what I wanted.
https://mattdesl.svbtle.com/drawing-lines-is-hard explains what is difficult about drawing lines.
He also has a library called https://github.com/mattdesl/three-line-2d which should make lines easier to use.
I'd like to program detection of a rectangular sheet of paper, which doesn't need to be perfectly straight on each side, as I may take a picture of it "in the air", meaning the individual sides of the paper might be distorted a bit.
The app CamScanner (iOS and Android) does this very well, and I'm wondering how it might be implemented. First of all I thought of the following pipeline (a rough code sketch follows the list):
Smoothing / noise reduction
Edge detection (Canny etc.) OR thresholding (global / adaptive)
Hough transform
Detect lines (only vertical / horizontal allowed)
Calculate the intersection points of the 4 found lines
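For illustration, here is a minimal sketch of that pipeline with OpenCV; the image name and all parameter values are placeholder assumptions of mine that would need tuning, not anything CamScanner is known to use:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "paper.jpg" is a placeholder; the parameters below would need tuning.
    cv::Mat gray = cv::imread("paper.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat blurred, edges;

    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);   // smoothing / noise reduction
    cv::Canny(blurred, edges, 50, 150);                    // edge detection

    // Probabilistic Hough transform: each entry is a line segment (x1, y1, x2, y2).
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);

    // The next step would be filtering for near-horizontal / near-vertical segments
    // and intersecting them to get the four corner candidates.
    return 0;
}
```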
But this gives me a lot of problems with different types of images.
And I'm wondering if there's maybe a better approach that directly detects a rectangle-like shape in an image, and if so, whether CamScanner implements it that way as well.
Here are some images taken in CamScanner.
These are detected quite nicely, even though in a) the side is distorted (the corner still gets shown in the overlay, but doesn't really fit the corner of the white paper) and in b) the background is pretty close to the actual paper, yet it still gets recognized correctly:
It even gets the rotated pictures correctly:
And when I insert some test errors, it fails but at least detects some of the contour, and it always tries to detect it as a rectangle:
And here it fails completely:
I suppose in the last three examples, if it did a Hough transform, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance
The OpenCV framework may help with your problem. Also, you can look at this document for the Android platform.
The full source code is available on GitHub.
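Not taken from that source, but a minimal sketch of the other common OpenCV approach (finding the largest convex quadrilateral contour instead of intersecting Hough lines) could look roughly like this; the thresholds and preprocessing are assumptions to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: return the largest convex quadrilateral contour found in a BGR image.
std::vector<cv::Point> findPaperQuad(const cv::Mat& bgr)
{
    cv::Mat gray, blurred, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
    cv::Canny(blurred, edges, 50, 150);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point> best;
    double bestArea = 0.0;
    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        // Keep only convex quadrilaterals; the biggest one is the paper candidate.
        if (approx.size() == 4 && cv::isContourConvex(approx)) {
            double area = cv::contourArea(approx);
            if (area > bestArea) { bestArea = area; best = approx; }
        }
    }
    return best;  // empty if no paper-like shape was found
}
```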
Fairly new to d3.js, I need to colorize my choropleth map with 5 color classes.
Supposing my input data are distributed over the values 0-100, if I use this code:
`urban_colore = d3.scale.quantize().domain([0, 100]).range(colorbrewer.Oranges[5])`
it works fine, but I obtain a coloring based on 5 classes equally distributed over that range: 0-20, 20-40, 40-60, 60-80, 80-100.
The problem is that I need to represent my data with different, not equally distributed, color classes: from 0 to 20, from 21 to 50, from 51 to 80, and over 80.
I'm really trying to understand which scale, domain and range I have to set up, but I can't find the way to do it.
Can anyone help me with the right line of code (and some explanation)?
I'm not sure I should be the one answering my own question, but thanks to AmeliaBR and FernOfTheAndes, and the very good, clear material they wrote, I got the solution. Sometimes the d3 API is a bit esoteric in places, and a newcomer can get confused. But in this answer you'll find the differences between a quantize scale and a threshold scale, and in this one the threshold scales are explained very well, leaving no more dilemmas.
This FIDDLE is the icing on the cake.
So in the end, even if it's not my cup of tea, this code works well for me:
`var urban_colore = d3.scale.threshold().domain([20,50,80,100]).range(colorbrewer.Oranges[5]);`
(but you must have at least d3 version 3.0, otherwise you get an "Uncaught TypeError: Object # has no method 'threshold'" error, as happened to me)
I've been researching the Kinect API and programming with the new SDK (1.5) for a few weeks now, and I'm basically trying to find where the eyes are in each image streamed from the Kinect sensor. I then want to get the RGB values for the pixels that make up the eyes. Though some variables, like pFaceModel2DPoint and pPts2D (both in Visualize.cpp), claim to store the x,y values for all 86 points that make up the face in the colorImage (of type IFTImage*), I have tested and re-tested them but cannot get worthwhile data out of them.
Furthermore, even if these x,y values corresponding to the eyes were correct for the given image, I cannot find out how to access the RGB values for each pixel desired. I know the macro (FTIMAGEFORMAT_UINT8_B8G8R8A8) to find the format in which the pixel data is stored, and I know that byte* pixels = colorImage->GetBuffer() will give the buffer stream for the current image streaming from the Kinect, but doing something as simple as pixels[rowNum*num_cols_per_row + colNum] = [...] inside of a for loop does not yield anything useful.
I've been really discouraged and disappointed that I cannot get this working, but I have searched through so many sites and search engines for any resolution to a problem close to mine and have found nothing. I wrote my own code several times using OpenCV and the Kinect, just the Kinect itself, and modifications of the MultiFace sample from the SDK. (These variables and functions listed above are from the MultiFace sample.) Any help would be extremely appreciated. Thank you!
Update: The Kinect API was unclear, hence my asking the question, but I have solved this problem after some trial and error. The image format is actually RGBX (formatted as BGRX), so elements 0-3 in the byte* correspond to pixel 0, elements 4-7 correspond to pixel 1, etc. Pretty simple; I had just gotten confused between the different methods of handling the image stream because there are a few GetBuffer-type calls in the same header file. Hopefully this will help someone else!
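To make that layout concrete, here is a small sketch of my own (not code from the SDK sample) that assumes a tightly packed buffer of width * height * 4 bytes, as described above:

```cpp
// Read the R, G, B values of one pixel from the BGRX buffer returned by GetBuffer().
struct Rgb { unsigned char r, g, b; };

Rgb pixelAt(const unsigned char* pixels, int width, int row, int col)
{
    const int i = (row * width + col) * 4;  // 4 bytes per pixel, in B, G, R, X order
    Rgb out;
    out.b = pixels[i + 0];
    out.g = pixels[i + 1];
    out.r = pixels[i + 2];
    // pixels[i + 3] is the unused X byte
    return out;
}
```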
So, I initially assumed that finding a chessboard in an image should be trivial because it is such an easily defined object. However, it isn't so easy, and I was wondering if someone knows how the chessboard finder cvFindChessboardCorners works in OpenCV. I've tried googling it but haven't managed to find the algorithm. I'm guessing maybe the following:
1) Binarize
2) Open and close to eliminate small clusters
A)
3) Find Harris corners
4) Create a distance matrix between all points in the image
5) ...?
B)
3) Hough transform
4) Check all significant lines for where they intersect. If 4 or more lines intersect at a point, then these lines are part of the chessboard. This includes the point at infinity.
5) ?
Anyone know exactly?
It's pretty ... complicated :) If you want to know exactly, the OpenCV source would be the place to look: in OpenCV 2.2 it's in modules/calib3d/src/calibinit.cpp, line 219. It also has a DEBUG_CHESSBOARD compile switch so you can see how it works.
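For completeness (this is just the public entry point being discussed, via the C++ API, not an explanation of the internals), a minimal call looks roughly like this; the image name and the 9x6 inner-corner pattern size are placeholder assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "board.jpg" and the 9x6 inner-corner count are placeholders.
    cv::Mat img = cv::imread("board.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point2f> corners;

    bool found = cv::findChessboardCorners(
        img, cv::Size(9, 6), corners,
        cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);

    if (found) {
        // Optional sub-pixel refinement of the detected corners.
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
    }
    return found ? 0 : 1;
}
```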