Obtaining the integral of a healpy map - healpy

I have a healpy map of the deflection field (the gradient of the lensing potential), which was obtained during a particular lensing simulation of the CMB. I want to obtain a map of the lensing potential using healpy if possible. I notice that there is a healpy function alm2map_der1() which will give me a healpy map and its first derivative given the map's alms. I am assuming this first derivative is the gradient of the map - please correct me if I am wrong. Essentially I want to know whether I can use healpy to invert this process: take away the gradient and recover just the lensing potential.
So far, my attempt has been to use the relation between the deflection and lensing-potential power spectra, Cls of deflection = l(l+1) * Cls of lensing potential, rearranged to Cls of lensing potential = Cls of deflection / (l(l+1)), and then to use synfast to convert this back into a map. I do not seem to be getting the correct map.
Is there a better way to do what I am trying to do? Maybe even not using healpy?

I can't help with the first part, but I know that converting to the Cls destroys the orientation information. 'synfast' provides a map with the power spectrum that you input but with a random orientation. If I run 'synfast' on a list of Cls with only dipole power, I get a random dipole orientation every time I run it:
import healpy
healpy.mollview(healpy.synfast([0,1],32,lmax=1))
I suggest working only with the alms if you want a map in the end.
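For example (a minimal sketch, assuming you already have the alms of the deflection field - how you obtain them depends on whether your map holds the gradient magnitude or the spin components): the alm-level version of your Cl relation is d_lm = sqrt(l(l+1)) * phi_lm, so you can divide the alms by sqrt(l(l+1)) with almxfl and synthesize the potential map with alm2map, keeping the phases intact:
import numpy as np
import healpy as hp

# deflection_alm is assumed given (the alms of the deflection field)
lmax = hp.Alm.getlmax(deflection_alm.size)
ell = np.arange(lmax + 1)
fl = np.zeros(lmax + 1)
fl[1:] = 1.0 / np.sqrt(ell[1:] * (ell[1:] + 1.0))  # skip l = 0 to avoid dividing by zero

phi_alm = hp.almxfl(deflection_alm, fl)             # phi_lm = d_lm / sqrt(l(l+1))
phi_map = hp.alm2map(phi_alm, nside=512)            # pick the nside you need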

Related

3D triangulation using HALCON

My aim is to calibrate a pair of cameras and use them for simple measurement purposes. For this purpose, I have already calibrated them using HALCON and have all the necessary intrinsic and extrinsic camera parameters. The next step for me is to measure known lengths to verify my calibration accuracy. So far I have been using the method intersect_lines_of_sight to achieve this. This has given me unfavourable results, as the lengths are off by a couple of centimeters. Is there any other method which basically triangulates and gives me the 3D coordinates of a point in HALCON? Or are there any leads as to how this can be done? Any help will be greatly appreciated.
Kindly let me know in case this post needs to be updated with code samples.
In HALCON there is also the operator reconstruct_points_stereo with which you can reconstruct 3D points given the row and column coordinates of a corresponding pixel. For this you will need to generate a StereoModel from your calibration data that is then used in the operator reconstruct_points_stereo.
In your HALCON installation there is a standard HDevelop example that shows the use of this operator. The example is called reconstruct_points_stereo.hdev and can be found in the example browser of HDevelop.
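If you want to cross-check the triangulation outside HALCON, here is a minimal numpy sketch of linear (DLT) triangulation from two views; it assumes you can export each camera's 3x4 projection matrix (intrinsics times extrinsics) from your calibration, which is not something the answer above covers:
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    # P1, P2: 3x4 projection matrices of the two cameras
    # pt1, pt2: (u, v) pixel coordinates of the same point in each image
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                # back from homogeneous coordinates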

MATLAB: Set points within plot polygon equal to zero

I am currently doing some seismic modelling and processing in MATLAB, and would like to come up with an easy way of muting parts of various datasets. If I plot the frequency-wavenumber spectrum of some of my data, for instance, I obtain the following result:
Now, say that I want to mute some of the data present here. I could of course attempt to run through the entire matrix represented here and specify a threshold value where everything above said value should be set equal to zero, but this will be very difficult and time-consuming when I later work with more complicated fk-spectra. I recently learned that MATLAB has a built-in function called impoly which allows me to interactively draw a polygon in plots. So say that I, for instance, draw the following polygon in my plot with the impoly function:
Is there anything I can do now to set all points within this polygon equal to zero? After defining the polygon as illustrated above I haven't found out how to proceed in order to mute the information contained in the polygon, so if anybody can give me some help here, then I would greatly appreciate it!
Yes, you can use the createMask function that's part of the impoly interface once you delineate the polygon in your figure. Once you create this mask, you can use it to index into your data and set the right regions to zero.
Here's a quick example using the pout.tif image in MATLAB:
im = imread('pout.tif');
figure; imshow(im);
h = impoly;
I get this figure and I draw a polygon inside this image:
Now, use the createMask function with the handle to the impoly call to create a binary mask that encapsulates this polygon:
mask = createMask(h);
I get this mask:
imshow(mask);
You can then use this mask to index into your data and set the right regions to 0. First make a copy of the original data, then set the masked entries to zero.
im_zero = im;
im_zero(mask) = 0;
I now get this:
imshow(im_zero);
Note that this only applies to single-channel (2D) data. If you want to apply this to multi-channel (3D) data, then a channel-wise multiplication with the complement of the mask may be prudent.
Something like this:
im_zero = bsxfun(@times, im, cast(~mask, class(im)));
The above code takes the complement of the polygon mask, converts it into the same class as the original input im, then performs an element-wise multiplication of this mask with each channel of the input separately. The result zeroes each spatial location defined in the mask over all channels.

How to detect a Triangle gesture with kinect?

I am trying to implement a gesture recognition system which interprets the geometric gestures a user makes and draws them on screen.
I have some idea of how a circle can be recognized; however, I have no clue how to get started with triangle recognition.
The data I have is X and Y coordinates of all points the gesture passed through. I get this data by tracking right hand.
I found something online called the Hough Transform, which is used for detecting lines, but I am not sure whether it will work for discrete collections of points.
Any ideas folks?
If you already have an x,y pair for the hand, the simplest thing that comes to mind is to try the $1 Unistroke Recognizer.
A handy thing to look at is Dynamic Time Warping (DTW). I've seen a fun Processing/SimpleOpenNI project that makes use of that technique and the full skeleton, called KineticSpace. Since it's open source, it might be worth having a peek.
I'd recommend trying the $1 Unistroke Recognizer first. You probably need to work out a system to mimic press/release (perhaps using the sign of the hand's velocity on z - positive-to-negative and negative-to-positive transitions?).
HTH
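For reference, here is a minimal DTW sketch in plain Python, assuming each gesture is just a list of (x, y) points; the template with the smallest distance to the recorded trajectory would be the recognized shape:
import math

def dtw_distance(path_a, path_b):
    # path_a, path_b: lists of (x, y) points; returns the DTW alignment cost
    n, m = len(path_a), len(path_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(path_a[i - 1], path_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]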
You could look at a space-filling curve. It reduces the two dimensions to one and reorders the points, while retaining some spatial information. Maybe you can train on, or compare, the reordered 1D indices with some simulated annealing or ant colony optimization? Space-filling curves are used in map tiling programs.
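As a rough illustration of that idea (not part of the original answer), here is a small Python sketch that maps (x, y) points on an n-by-n grid (n a power of two) to their Hilbert-curve index, so a gesture's points can be compared as a 1D sequence:
def hilbert_index(n, x, y):
    # Standard Hilbert-curve xy-to-index conversion on an n x n grid
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                    # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d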

PointCloud with multiple Kinects

I am trying to map a user as a PointCloud with multiple Kinects in Processing. I capture the user's front and back with two Kinects on opposite sides and generate both PointClouds.
The trouble is that the PointClouds' X/Y/Z are not synchronized; it just puts the two of them on screen, and it surely looks messy. Is there a way to calculate, or make a comparison between them, so as to translate the second PointCloud to "join" the first? I could translate the position manually, but if I move the sensors it will go off again.
Supposing all the Kinects are stationary, I guess you would have to go in this order:
decide on which Kinect to use as a global reference,
get the parameters of a 3D transformation for each of the other Kinects - I'd try to use PMatrix3D and applyMatrix(), although it may be slow,
apply the transformations to each of the other Kinects' point clouds and draw the clouds.
I don't (yet) know how to get the transformation parameters for a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, maybe by displaying the point clouds from each pair of Kinects and registering the points you know are the same in both point clouds. After getting enough of them, construct a PMatrix3D and apply it inside push/popMatrix.
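If it helps, here is a small numpy sketch (not Processing, but the resulting R and t can be loaded into a PMatrix3D) of the Procrustes/Kabsch solution for the rigid transform between two sets of matched reference points:
import numpy as np

def rigid_transform(A, B):
    # A, B: (N, 3) arrays of corresponding points
    # returns R, t such that B[i] ~ R @ A[i] + t for each point
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)          # cross-covariance of the centred points
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t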
This is the approach used by this guy: http://www.youtube.com/watch?v=ujUNj1RDL4I
An alternative approach would be to use an Iterative Closest Point algorithm and construct 3D transform from its output. I'd really like an ICP or PCL library for Processing, if anyone knows a good one.

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors, but it's simple enough to subtract 0.5 so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously; working out a proper liquify effect is quite complex and I'll leave it to someone more qualified).
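If you're prototyping on the CPU rather than in a shader, a rough numpy sketch of the lookup described above might look like this (names are just for illustration, and the offset map is assumed to be already centred around zero):
import numpy as np

def apply_distortion(image, offset_map):
    # image: (H, W) or (H, W, C) array; offset_map: (H, W, 2) per-pixel (dx, dy) offsets
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs + offset_map[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys + offset_map[..., 1]).round().astype(int), 0, h - 1)
    return image[src_y, src_x]         # nearest-neighbour lookup into the source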
I think liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now when the user clicks on a location and moves the mouse, they are changing the grid locations.
The new grid is again projected into the user's 2D viewable space.
Check this tutorial about a way to implement the liquify filter with JavaScript. Basically, in the tutorial, the effect is done by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
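A rough Python/numpy sketch of that idea (hypothetical names, nearest-neighbour sampling; the tutorial itself is in JavaScript): take each pixel's offset from a chosen centre, remap the radius with a square root inside a brush radius, and sample the source image at the remapped position.
import numpy as np

def radial_warp(image, cx, cy, radius):
    # Remap r to sqrt(r / radius) * radius inside the brush, leave the rest untouched
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx)
    r_new = np.where(r < radius, np.sqrt(r / radius) * radius, r)
    src_x = np.clip((cx + r_new * np.cos(alpha)).round().astype(int), 0, w - 1)
    src_y = np.clip((cy + r_new * np.sin(alpha)).round().astype(int), 0, h - 1)
    return image[src_y, src_x]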
