In the GalSim library, what does applyShear do?

Does it do the same image distortion as applyTransformation?
Does the applyTransformation function still exist?
And about the applyShear function:
it only applies a shear, with no other lensing effect, i.e. no magnification or other higher-order effects, right?

All of the written documentation, examples, and source code can be found on GitHub at https://github.com/GalSim-developers/GalSim
I suggest looking there first; if there aren't any docs that answer your question directly, you can always study the source code for the applyShear functionality you are describing.
Good luck

In version 1.1, the methods applyShear and applyTransformation were deprecated. The preferred methods to use are now shear and transform.
The shear method is typically used as sheared_obj = obj.shear(g1=g1, g2=g2) where g1, g2 are the components of the reduced shear to be applied. You can also give e1,e2 (distortions rather than shear), or g, beta or e, beta (giving the magnitude and position angle), among other possibilities. See the docs for the Shear class for more information about ways to specify a shear in GalSim.
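For example (a small sketch; obj stands for any GSObject here, and the numbers are chosen so that both forms describe roughly the same shear, since g1 = g cos(2 beta) and g2 = g sin(2 beta)):
sheared = obj.shear(g1=0.05, g2=0.02)                     # reduced shear components
s = galsim.Shear(g=0.0539, beta=10.9 * galsim.degrees)    # magnitude + position angle form
sheared = obj.shear(s)                                    # shear() also accepts a Shear object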
However you specify the shear, the shear method will shear the given surface brightness profile by that amount in a manner that preserves total flux.
The transform method is somewhat more general in that you can transform by any arbitrary 2x2 coordinate transformation matrix. Specifically, you specify an arbitrary Jacobian: dudx, dudy, dvdx, dvdy, where (x,y) are the original coordinates and (u,v) are the transformed coordinates. With this method, you could apply a transformation that is equivalent to a shear, but you would need to manually calculate the correct terms in the Jacobian.
The other difference is that transform does not necessarily preserve flux. The flux is only preserved if the Jacobian has unit determinant. So depending on your use case, you may want to rescale the flux by the determinant when you are done.
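As a concrete sketch of the difference (this assumes the post-1.1 shear/transform methods described above and GalSim's usual reduced-shear convention for the Jacobian; the numbers are arbitrary):
import numpy as np
import galsim

g1, g2 = 0.05, 0.02
# The reduced shear corresponds to the unit-determinant Jacobian
#   [[1+g1, g2], [g2, 1-g1]] / sqrt(1 - g1^2 - g2^2)
norm = 1.0 / np.sqrt(1.0 - g1**2 - g2**2)
dudx, dudy = norm * (1.0 + g1), norm * g2
dvdx, dvdy = norm * g2, norm * (1.0 - g1)

gal = galsim.Exponential(flux=1.0, half_light_radius=1.5)
sheared_a = gal.shear(g1=g1, g2=g2)                # flux-preserving shear
sheared_b = gal.transform(dudx, dudy, dvdx, dvdy)  # should be equivalent; det = 1, so flux is also preserved

# For a general Jacobian with det != 1, rescale the flux afterwards if desired:
det = dudx * dvdy - dudy * dvdx
rescaled = gal.transform(dudx, dudy, dvdx, dvdy) * (1.0 / det)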
My guess is that you will most often want to use shear rather than transform. Here is a sample use case (taken from demo5.py in the GalSim examples directory):
gal = galsim.Exponential(flux=1., half_light_radius=gal_re)
[...]
for [...]:
    # Now in a loop to draw many versions of this galaxy with different shears, etc.
    # Make a new copy of the galaxy with an applied e1/e2-type distortion
    # by specifying the ellipticity and a real-space position angle
    this_gal = gal.shear(e=ellip, beta=beta)
    # Apply the gravitational reduced shear by specifying g1/g2
    this_gal = this_gal.shear(g1=gal_g1, g2=gal_g2)
    [...]
Hope this helps.
Note: The links above are current as of December, 2014. If they go stale, try navigating from the top level of the Doxygen documentation, which hopefully will still work.

I would just like to add to the response above from ne1410s (I don't have the reputation points to comment on his or her post).
In addition to that excellent information, there are thorough docstrings for most of the standard functionality, which you can check out using the Python help() routine, e.g.
import galsim
help(galsim.GSObject.applyShear)
In the case of applyShear(), this routine is deprecated in the latest version of GalSim, but the docstring still exists and points the user to the new functionality. For older versions of GalSim, there is a complete docstring that explains how applyShear() works.
To summarize, going to the GalSim repository, you can get help in the following ways:
Go through the demos in examples/ (they are extensively commented).
Go through the quick reference guide in doc/
Go to the complete documentation including all docstrings, at http://galsim-developers.github.io/GalSim/
And then there are the docstrings for individual routines, which are accessible via help(). It should not, in general, be necessary to read the source code to learn how routines work.

Related

Convert H3Index to IJK Coordinate?

Is it possible to convert from the "H3Index Representation" (https://h3geo.org/docs/core-library/h3Indexing) to "IJK Coordinates" (https://h3geo.org/docs/core-library/coordsystems)?
Assuming there is an API for it: which API would have to be used, and is this API available in the DLL/runtime library?
If there is no API for it, is there another way? (Maybe the vertex API can be used for this?)
The current version 3.x DLL seems to be missing some APIs that are also marked as "internal functions" in the include headers.
We don't offer an external API for IJK coords - these coordinates would be difficult to work with, as they are only valid within a single icosahedron face, and crossing an icosahedron edge would result in a different coordinate system.
We do offer a relative IJ coordinate system through the API:
V3: experimentalH3ToLocalIj
V4: cellToLocalIj
These functions require an "origin" index that serves to anchor the coordinate system on a single icosahedron face, and they are capable of "unfolding" the icosahedron to cross over to one adjacent face, so they're generally going to be more reliable than direct access to the IJK coords. You can convert IJ to IJK coords fairly easily. Just note that different parts of the world will have different coordinate systems, hence the "origin" requirement - this is meant for local area use, not global use.
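For what it's worth, here is a small sketch of that approach using the Python bindings; latlng_to_cell and cell_to_local_ij are the h3-py v4 names assumed to mirror latLngToCell and cellToLocalIj, and the coordinates and resolution are made up for the example. The IJ-to-IJK step mirrors the normalization the core library uses internally (fold negative axes into the other two, then subtract the common minimum):
import h3

origin = h3.latlng_to_cell(52.37, 4.89, 9)   # anchor cell for the local frame (placeholder location)
cell   = h3.latlng_to_cell(52.38, 4.90, 9)

i, j = h3.cell_to_local_ij(origin, cell)     # relative IJ coords; can be negative or positive

def ij_to_ijk(i, j):
    # IJ -> IJK: add k = 0, then normalize so all axes are >= 0 and at least one is 0
    i, j, k = i, j, 0
    if i < 0:
        j -= i; k -= i; i = 0
    if j < 0:
        i -= j; k -= j; j = 0
    if k < 0:
        i -= k; j -= k; k = 0
    m = min(i, j, k)
    return i - m, j - m, k - m

print(ij_to_ijk(i, j))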
The main issue I was facing was missing API functions in H3.DLL. To build H3.DLL with all API implementations, the following commands should be used:
Step 2: Configure for a DLL release build:
cmake ..\h3V4 -DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=ON
Step 3: Build:
cmake --build . --config Release
Once this is done, the following API becomes available:
_h3ToFaceIjk
which can be used to convert an H3Index to a FaceIJK index. Four coordinates are returned into a structure via a pointer.
The first coordinate is some kind of face number, possibly related to the icosahedron.
The second, third, and fourth are the I, J, K coordinates; they go up to a certain range.
The overall coordinate system is indeed a bit messy when using this API.
It would indeed be useful, I think, to base the IJK coordinates on some selected hexagon, so that this hexagon can function as location 0,0,0,0 or just 0,0,0, and then re-map all other hexagons around it. It might be even more useful if the coordinates could be negative as well as positive.
I will try to play with other conversion routines in the library, which might do this; I am not sure it will be perfect.
So far the documentation seems to indicate that coordinates in the FaceIJK system are always positive...
I could also try to come up with my own algorithm to attach desirable coordinates to the hexagons.
Basically
-1 -1
-1 +1
+1 +1
would be desirable.
Pretend the above drawing is a hexagon, or a triangulation inside the hexagon, and that those numbers sit on the points/vertices of the hexagon.

Direct2D: Non-Affine transformation

https://learn.microsoft.com/en-us/windows/desktop/direct2d/direct2d-transforms-overview seems to be clear that "Direct2D supports only affine (linear) transformations".
But what if I need to transform an image to some arbitrary points; what are my options in 2019? I note this has been asked before (Mapping corners to arbitrary positions using Direct2D), but that was in 2012 and I am wondering if there is any current option.
I had naively assumed that if I had a projective transform matrix (from cv::getPerspectiveTransform, for instance) then things would work. Guess it pays to RTFM before diving into using Direct2D.
You can probably use effects to achieve that, for example CLSID_D2D13DPerspectiveTransform or CLSID_D2D13DTransform. I believe they act as post-processing: you prepare your image, set it as the effect input, and draw with the selected effect.

Understanding of NurbsSurface

I want to create a NURBS surface in OpenGL. I use a grid of control points of size 40x48. Besides that, I create indices in order to determine the order of vertices.
In this way I created my surface out of triangles.
Just to avoid misunderstandings, I have
float[] vertices=x1,y1,z1,x2,y2,z2,x3,y3,z3....... and
float[] indices= 1,6,2,7,3,8....
Now I don't want to draw triangles anymore. Instead, I would like to interpolate the surface points. I thought about NURBS or B-splines.
The issue is this: in order to apply the NURBS algorithms, I have to interpolate patch by patch. In my understanding, one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please open the picture).
First of all, I created the vertices and indices in order to use a vertex shader.
But actually it would be enough to draw it the old way. In this case I would determine vertices and indices as follows:
float[] vertices= v1,v2,v3... with v=x,y,z
and
float[] indices= 1,6,2,7,3,8....
In OpenGL (via GLU), there is a NURBS renderer ready to use: gluNewNurbsRenderer. So I can render a patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation (the Teapot example), but (maybe I have become obsessed by this) I can't transfer that solution to my case. Can you help?
You have a set of control points from which you want to draw a surface. There are two ways you can go about this.
Option 1 (this is what the Teapot example link you provided describes): calculate the vertices from the control points yourself and pass them down the graphics pipeline with GL_TRIANGLES as the topology. Remember that graphics hardware needs triangulated data in order to draw. This link shows how to evaluate vertices from control points (a minimal sketch of the idea is also given at the end of this answer):
http://www.glprogramming.com/red/chapter12.html
Option 2: prepare patches of your control points and use tessellation shaders to triangulate and stitch those points. For this you submit each set of control points as a patch, using the GL_PATCHES primitive, and pass it to the tessellation control shader. There you specify the outer and inner tessellation levels you want; depending on those, your patch will be tessellated by another fixed-function stage known as the primitive generator. The generated vertices are then passed to the tessellation evaluation shader, in which you fine-tune them, i.e. evaluate the actual surface positions; higher tessellation levels will subdivide your patch further.
I would suggest you set up your VBO and IBO with the control points, as you already have, and use the GL_PATCHES primitive when drawing. Then follow a tutorial on how to use tessellation shaders to draw NURBS surfaces.
Note: the second method I have suggested is kind of tricky and you will have to read a lot of research papers.
If you don't want to go with the modern pipeline, then I suggest going with option 1.
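As a rough illustration of option 1, here is a sketch in Python/NumPy of evaluating one 4x4 patch into a grid of vertices plus triangle indices. It assumes bicubic Bezier patches (which is also what the teapot example uses); for a B-spline or NURBS surface the basis functions differ, but the structure is the same. All names are illustrative:
import numpy as np

def bernstein3(t):
    # Cubic Bernstein basis values at parameter t
    return np.array([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])

def evaluate_patch(ctrl, nu=16, nv=16):
    # ctrl: (4, 4, 3) control points; returns (nu*nv, 3) surface vertices
    verts = []
    for iu in range(nu):
        bu = bernstein3(iu / (nu - 1))
        for iv in range(nv):
            bv = bernstein3(iv / (nv - 1))
            # tensor-product evaluation: sum over i,j of bu[i] * bv[j] * ctrl[i, j]
            verts.append(np.einsum('i,j,ijk->k', bu, bv, ctrl))
    return np.array(verts)

def triangle_indices(nu=16, nv=16):
    # Two triangles per grid cell, indexing the (nu x nv) vertex grid row-major
    idx = []
    for iu in range(nu - 1):
        for iv in range(nv - 1):
            a = iu * nv + iv
            b = a + nv
            idx += [a, b, a + 1, a + 1, b, b + 1]
    return np.array(idx, dtype=np.uint32)
Adjacent patches that share their boundary row or column of control points produce identical boundary vertices, which is what stitches them together into one seamless surface.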

Finding the position of edge defects of a circular object with MATLAB

I have a problem finding defects at the edge of a circular object. It's hard to describe so I have a picture which may help a bit. I am trying to find the red marked areas, such as what is shown below:
I already tried template matching with vision.TemplateMatcher(), but this only works well for the picture I made the template from.
I tried vision.CascadeObjectDetector(), trained with 150 images, and got fewer than 5% correct results.
I also tried matching with detectSURFFeatures() followed by matchFeatures(), but this only works on quite similar defects (it fails when the edges are not closed).
Since the defects are close to half a circle, I tried to find them with imfindcircles(), but that gives far too many possible results. When I take the one with the highest metric, I sometimes get the right one, but not even close to 30% of the time.
Does anyone have an idea what I can try in order to find at least more than 50%?
If someone has an idea and wants to try something I added another picture.
Since I am new I can only add two pictures but if you need more I can provide more pictures.
Are you trying to detect rough edges like that on smooth binary overlays like the one you provided? E.g. is the program's input a black image with lots of circles with rough edges, which it is then supposed to detect, i.e. sudden rough discontinuities in a normally very smooth region?
If that assumption is valid, then this may be solvable via classical signal processing. My suggestion: plot the intensity along a line between a point outside and a point inside the circle. It should look like:
.. continuous constant ... continuous constant .. continuous constant.. DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! ... continuous constant .. continuous constant..
Write your own function to detect these discontinuities.
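If it helps, here is a rough sketch of that idea (written in Python/NumPy purely for illustration; in MATLAB, improfile does the sampling step for you, and the jump threshold below is a made-up placeholder):
import numpy as np

def line_profile(img, center, angle, r_max, n=200):
    # Sample image intensities along a ray from center out to radius r_max
    cx, cy = center
    r = np.linspace(0, r_max, n)
    xs = np.clip(np.round(cx + r * np.cos(angle)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(angle)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def looks_defective(profile, jump=40):
    # A clean edge crosses the boundary once; extra large jumps suggest a defect
    d = np.abs(np.diff(profile.astype(float)))
    return np.count_nonzero(d > jump) > 1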
OR
Gradient: The rate of change of certain quantities w.r.t a distance measure.
Use the very famous Sobel (gradient) filter.
Use the X-axis version of the filter and look at the result; if it gives you something detectable, use it, and do the same with the Y-axis version of the filter.
In case you're wondering: in MATLAB you just need the standard, readily available 3x3 Sobel kernel (seen almost everywhere on the internet) and plug it into the imfilter function, or use the built-in implementation, edge(image,'sobel') (if you have the required toolbox).
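The same idea in Python/NumPy terms, purely as a sketch (in MATLAB, imfilter or edge as mentioned above do the equivalent; the synthetic image below is just a stand-in for your own data):
import numpy as np
from scipy import ndimage

# Synthetic stand-in for the grayscale input image (replace with your own)
yy, xx = np.mgrid[0:256, 0:256]
img = ((xx - 128)**2 + (yy - 128)**2 < 80**2).astype(float)

gx = ndimage.sobel(img, axis=1)   # X-axis version of the filter
gy = ndimage.sobel(img, axis=0)   # Y-axis version of the filter
magnitude = np.hypot(gx, gy)      # inspect gx, gy, and the magnitude to see whether the defects stand out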

Matching photographed image with screenshot (or generated image based on data model)

First of all, I have to say I'm new to the field of computer vision, and I'm currently facing a problem that I tried to solve with OpenCV (Java wrapper) without success.
Basically I have a picture of a part from a model taken by a camera (different angles, resolutions, rotations...) and I need to find the position of that part in the model.
Example Picture:
Model Picture:
So one question is: Where should I start/which algorithm should I use?
My first try was to use keypoint matching with SURF as detector and descriptor and BF as matcher.
It worked for about 2 pictures out of 10. I used the default parameters and tried other detectors, without any improvement. (Maybe it's a question of the right parameters, but how do I find the right parameters combined with the right algorithm?...)
Two examples:
My second try was to use color to differentiate the individual elements in the model and to compare the structure with the model itself (in addition to the picture of the model, I also have an XML representation of the model).
Right now I extracted the color red out of the image and adjusted the H, S, V values manually to get the best detection for about 4 pictures, but it fails for other pictures.
Two examples:
I also tried to use edge detection (Canny, grayscale, with histogram equalization) to detect geometric structures. For some results I could imagine that it would work, but using the same Canny parameters for other pictures fails. Two examples:
As I said, I'm not familiar with computer vision and just tried out some algorithms. The problem I'm facing is that I don't know which combination of algorithms and techniques is best, and in addition which parameters I should use. Testing them manually seems to be impossible.
Thanks in advance
gemorra
Your initial idea of using SURF features was actually very good; just try to understand how the parameters of this algorithm work and you should be able to register your images. A good starting point would be varying only the Hessian threshold, and being fearless while doing so: your features are quite well defined, so try to use thresholds around 2000 and above (increasing in steps of 500-1000 until you get good results is totally ok).
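In case it helps, here is a minimal sketch of that tuning using the OpenCV Python bindings (SURF lives in the contrib xfeatures2d module and may require a non-free build; the question uses the Java wrapper, but the parameters are the same, and the file paths below are placeholders):
import cv2

img1 = cv2.imread('part_photo.png', cv2.IMREAD_GRAYSCALE)   # photographed part (placeholder path)
img2 = cv2.imread('model.png', cv2.IMREAD_GRAYSCALE)        # model image (placeholder path)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=2000)   # start around 2000 and raise in steps
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Brute-force matching with a ratio test to drop ambiguous matches
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), 'good matches')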
Alternatively, you can try to detect your ellipses and calculate an affine warp that normalizes them, then run a cross-correlation to register them. This alternative does imply much more work, but is quite fascinating. Some ideas on that normalization using the covariance matrix and its Cholesky decomposition are given here.
