Okay, I am so lost here that I cannot even formulate a concrete question, so I shall be very general and hope that someone can point me in the right direction.
I am producing some scientific plots in Julia with PyPlot, and I am very satisfied with the results (adequate, clear aesthetics, and I can handle the syntax to create very complex images). But I need to produce a so-called "heatmap" (a 2D bitmap image) in which the user should be able to select a set of points of the image with the mouse. The selection, which will be confined to a discrete grid, shall be stored in some iterable, an Array or similar. I have no idea where to start: with the PyPlot library itself, or with something like Gtk or GtkReact (I couldn't get the examples for the latter running). Can someone point me in the right direction?
PyPlot is an interface to Python's matplotlib, which primarily produces static plots.
One alternative is to use Plotly inside a Jupyter notebook: https://plot.ly/julia/heatmaps/
Related
Working with Matplotlib, I have produced some resistivity cross-sections of the soil, obtaining pictures like this:
Now I would like to display all those sections in 3D so as to visualise better the spatial distribution of resistivity in the field (i.e. a so-called fence diagram). I would also like to plot the 2D map of the site where those measurements were carried out at the base of my plot (say on the XY plane).
As far as I have seen this is not feasible (or at least not convenient) with Matplotlib in 3D hence I decided to switch to Mayavi.
My questions are:
is it feasible to georeference the rasters and then properly place them on the correct (vertical) planes (not necessarily parallel to the Cartesian ones) with Mayavi? Does imshow() serve this purpose?
is it better to recreate the contours in Mayavi at the proper locations? If so, I did not find a function to create contours from unstructured data (the input images were created with tricontour/tricontourf in Matplotlib). I do not think interpolating over a structured grid in SciPy would do, given the non-convex domain.
Ok, answering my own question:
from mayavi import mlab
mesh = mlab.pipeline.triangular_mesh_source(x, y, z, triangles)  # x, y, z, triangles from the section data
surf = mlab.pipeline.surface(mesh)
seems to do the job.
To be consistent with the previous work, the triangulation, duly masked, can be directly imported from Matplotlib.
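The matplotlib side of that hand-off can be sketched as follows; the corner points and the mask here are invented just to show the mechanics. `tri.get_masked_triangles()` yields the connectivity array you would pass to Mayavi as `triangles`:

```python
import numpy as np
import matplotlib.tri as mtri

# four corner points of a unit square -> Delaunay triangulation
x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
tri = mtri.Triangulation(x, y)

# mask out triangles you do not want rendered (here: the first one)
mask = np.zeros(tri.triangles.shape[0], dtype=bool)
mask[0] = True
tri.set_mask(mask)

triangles = tri.get_masked_triangles()  # only unmasked triangles survive
# mlab.pipeline.triangular_mesh_source(x, y, z, triangles) would use these directly
```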
I was recently doing some EDA on a data set. I created a boxplot, a countplot and a violinplot using seaborn and combined them into one image using matplotlib.
But the result is not very easy on the eye and looks very congested.
Is this normal? Is there any way to make it better?
This is the image of the notebook
The answer depends on your problem:)
In my opinion, the spacing is not the problem; the aspect ratio is. Your individual plots don't have enough height, so try changing the aspect ratio and see if you like it better. Change the first line to:
plt.figure(figsize=(8, 12))
for example.
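A minimal sketch of that fix, using plain-matplotlib stand-ins for the seaborn calls (the data and the 3x1 layout are assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

data = np.random.default_rng(0).normal(size=200)

# a taller figure gives each stacked subplot more vertical room
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(8, 12))
ax1.boxplot(data)        # stand-in for the seaborn boxplot
ax2.hist(data, bins=20)  # stand-in for the countplot
ax3.violinplot(data)     # stand-in for the violinplot
fig.tight_layout()       # also removes overlap between the subplots
```

With seaborn itself, pass `ax=ax1` etc. to each plotting call so each plot lands on the intended subplot.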
I have two binary images like this. I have a data set with lots of pictures like the one at the bottom, but with different signs.
and
I would like to compare them in order to know whether they show the same figure or not (especially inside the triangle). I took a look at SIFT and SURF features, but they don't work well on this type of picture (they find matching points even when the two pictures are different, especially inside).
I have also heard about SVMs, but I don't know whether I should implement one for this type of problem.
Do you have an idea ?
Thank you
I think you should not use SURF features on the binary image as you have already discarded a lot of information at that stage with your edge detector.
You could also use the Linear or Circular Hough Transform, which in this case could tell you a lot about image differences.
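To make that concrete, here is a tiny line-Hough accumulator in pure NumPy (a sketch; in practice OpenCV's HoughLines / HoughCircles would be the pragmatic choice). Comparing the dominant accumulator peaks of two binary images gives a rough shape-level difference measure:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Vote rho = x*cos(theta) + y*sin(theta) for every foreground pixel."""
    ys, xs = np.nonzero(binary)
    diag = int(np.ceil(np.hypot(*binary.shape)))   # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for j, t in enumerate(thetas):
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[:, j], rhos, 1)              # accumulate votes
    return acc, diag

# the accumulator peak (rho, theta) identifies the dominant straight line;
# comparing the top peaks of two images highlights structural differences
```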
If you want to find two exactly identical images, simply use a hash function like MD5.
But if you want to find related (not exactly identical) images, you are in for trouble ;). Look into artificial neural network libraries...
I have to simulate facial expressions on a face image (say, an open mouth). For this I first extracted the facial feature points and found the corners of the lips. Now I need to deform the image by moving the points down.
In the above image I need to move points (2) and (3) some distance to the left and right respectively, and points (18) and (19) a little bit down, so that I get an expression like an opened mouth.
My Questions:
1) Is the above the right way to proceed to simulate facial expressions?
2) If it is, how do I move the points and create a new image in OpenCV?
A fairly recent survey and course of techniques people have used in this area is here:
http://old.siggraph.org/publications/2006cn/course30.pdf
TL;DR: There is no "right" way to do it in any absolute sense. You need to define your goal in a way that is computable, then figure out what additional (prior) information you need to reach it, beyond the image data themselves. Fiddling with "texture warping" or other interpolation schemes before you decide what you need to do is a waste of time.
You mention "an expression like an opened mouth", and I interpret that to mean that you'd like to produce an image similar to what the real face would look like if the subject had been photographed with their mouth open. The markers you found obviously do not give enough information about that - in particular, they do not express any notion of "mouth". In fact, that notion is nowhere to be found in the image. So, strictly speaking, your task is unsolvable unless you throw more information into it.
I suggest you take a look at the paper pointed above, and rethink your problem again.
so, here we go again..
I've seen a lot of people use Delaunay triangulation from those points, and then texture warping or distortion in OpenGL or even OpenCV.
https://github.com/MasteringOpenCV/code/tree/master/Chapter7_HeadPoseEstimation
It looks quite related; parts of the "Mastering OpenCV" book are on Google Books.
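The per-triangle step of that approach can be sketched in pure NumPy (nearest-neighbour sampling; a real pipeline would vectorise this or use an affine warp per triangle in OpenCV). The triangles and image below are illustrative:

```python
import numpy as np

def warp_triangle(img, src_tri, dst_tri, out):
    """Copy the pixels inside src_tri of `img` to dst_tri of `out`
    via the affine map between the two triangles."""
    src = np.asarray(src_tri, dtype=float)          # 3 corners as (x, y)
    dst = np.asarray(dst_tri, dtype=float)
    dst_h = np.hstack([dst, np.ones((3, 1))])       # homogeneous dst corners
    M = np.linalg.solve(dst_h, src)                 # inverse map: dst -> src
    x0, y0 = np.floor(dst.min(axis=0)).astype(int)  # dst bounding box
    x1, y1 = np.ceil(dst.max(axis=0)).astype(int)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            if not (0 <= y < out.shape[0] and 0 <= x < out.shape[1]):
                continue
            # barycentric coordinates of (x, y) w.r.t. the dst triangle
            lam = np.linalg.solve(dst_h.T, np.array([x, y, 1.0]))
            if (lam < -1e-9).any():
                continue                             # outside the triangle
            sx, sy = np.array([x, y, 1.0]) @ M       # matching source point
            sy = min(max(int(round(sy)), 0), img.shape[0] - 1)
            sx = min(max(int(round(sx)), 0), img.shape[1] - 1)
            out[y, x] = img[sy, sx]
```

Opening the mouth then amounts to moving the lip landmarks in each destination triangle downwards and re-warping every triangle of the Delaunay triangulation.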
I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
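A sketch of that idea in NumPy: a separable Gaussian blur restricted to a rectangular region around a control point (the region bounds and sigma are illustrative):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian, truncated at 3 sigma and normalised to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur_region(img, y0, y1, x0, x1, sigma=1.0):
    """Gaussian-blur only the rectangle img[y0:y1, x0:x1] (separable kernel)."""
    k = gaussian_kernel(sigma)
    region = img[y0:y1, x0:x1].astype(float)
    region = np.apply_along_axis(np.convolve, 1, region, k, mode="same")  # rows
    region = np.apply_along_axis(np.convolve, 0, region, k, mode="same")  # cols
    out = img.astype(float).copy()
    out[y0:y1, x0:x1] = region
    return out
```

A smudge stroke would then repeat this along a path from the control point in the chosen direction.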
Another method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent result.
James