I have a list which is almost certainly some sort of wave-like oscillation:
y(x): {{x_1, y_1}, ..., {x_n, y_n}}
and I want to find the wavelets that best interpolate this data using Mathematica. However, Mathematica's wavelet toolbox accepts standard uniformly sampled series rather than non-uniform grids.
Possible solutions are either to first interpolate y onto a uniform x grid and then proceed with the wavelet decomposition, or to fit wavelet functions explicitly using the FindFit routines without referring to the toolbox.
Are there any other solutions?
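For concreteness, here is a minimal sketch of the first option (resample, then decompose), written in Python with scipy and PyWavelets purely for illustration; the grid size, interpolation kind, wavelet and level are arbitrary choices, and in Mathematica the analogous route would be Interpolation followed by DiscreteWaveletTransform:

    import numpy as np
    from scipy.interpolate import interp1d
    import pywt

    # Toy non-uniform samples (stand-ins for the {x_i, y_i} list).
    x = np.sort(np.random.rand(100))
    y = np.sin(12 * x) + 0.1 * np.random.randn(100)

    # Step 1: resample onto a uniform grid (cubic interpolation is one choice).
    xu = np.linspace(x[0], x[-1], 128)
    yu = interp1d(x, y, kind='cubic')(xu)

    # Step 2: ordinary discrete wavelet decomposition on the uniform series.
    coeffs = pywt.wavedec(yu, 'db4', level=3)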
Related
Say we are making a program to render the plot of a function (a black box) provided by the user as a sequence of line segments. We want to use the minimum number of samples of the function such that the resulting image "looks" like the function (the exact meaning of "looks" here is part of the question). A naive approach might be to just sample at fixed intervals, but we can probably do better than that, e.g. by sampling the "curvy bits" more than the "linear bits". Are there systematic approaches/research on this problem?
This reference, which uses a combined sampling method, can be helpful. Before that, its related-work section explains more about other methods of sampling:
There are several strategies for plotting the function y = f(x) on an interval Ω = [a, b]. The naive approach, based on sampling f at a fixed number of equally spaced points, is described in [20]. Simple functions suffer from oversampling, while oscillating curves are under-sampled; these issues are mentioned in [14]. Another approach, based on the interval constraint plot constructing a hull of the curve, was described in [6], [13], [20]. The automated detection of a useful domain and range of the function is mentioned in [41]; the generalized interval arithmetic approach is described in [40].

A significant refinement is represented by adaptive sampling, providing a higher sampling density in the higher-curvature regions. There are several algorithms for curve interpolation preserving the speed, for example [37], [42], [43]. The adaptive feed-rate technique is described in [44]. An early implementation in the Mathematica software is presented in [39]. By reducing the data, these methods are very efficient for curve plotting. The polygonal approximation of a parametric curve based on adaptive sampling is mentioned in several papers. The refinement criteria, as well as the recursive approach, are discussed in [15]. An approximation by polygonal curves is described in [7]; a robust method for the geometric and spatial approximation of implicit curves can be found in [27], [10]; affine arithmetic working on triangulated models in [32]. However, map projections are never defined by implicit equations. Similar approaches can be used for graph drawing [21].

Other techniques based on approximation by breakpoints can be found in many papers: [33], [9], [3]; these approaches are used for the polygonal approximation of closed curves and applied in computer vision.
Hence, these are the reference methods that define some measures for a "good" plot and introduce an approach to optimize the plot based on the measure:
constructing a hull of the curve
automated detection of a useful domain and a range of the function
adaptive sampling: providing a higher sampling density in the higher-curvature regions (see the sketch after this list)
approximation by the polygonal curves
affine arithmetic working in the triangulated models
combined sampling: a polygonal approximation of the parametric curve that also handles discontinuities. The modified method is used to reconstruct and plot the function f(x); based on the idea of splitting the domain into subintervals free of discontinuities, it represents a typical problem solvable by a recursive approach.
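As a concrete illustration of the adaptive-sampling idea, here is a minimal recursive sketch in Python; the chord-deviation test, tolerance and depth limit are assumptions of this sketch, not taken from the cited papers:

    import math

    def adaptive_sample(f, a, b, tol=1e-3, depth=12):
        """Sample f on [a, b], recursively refining wherever the curve
        deviates from the chord between the current endpoints."""
        fa, fb = f(a), f(b)

        def refine(x0, y0, x1, y1, d):
            xm = 0.5 * (x0 + x1)
            ym = f(xm)
            # Flat enough (or out of budget): keep just the right endpoint.
            if d == 0 or abs(ym - 0.5 * (y0 + y1)) < tol:
                return [(x1, y1)]
            return refine(x0, y0, xm, ym, d - 1) + refine(xm, ym, x1, y1, d - 1)

        return [(a, fa)] + refine(a, fa, b, fb, depth)

    # The "curvy bits" of sin(1/x) get far more samples than the flat ones.
    pts = adaptive_sample(lambda x: math.sin(1.0 / x), 0.1, 1.0)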
In most graphics libraries I've seen, there's some function that returns the determinant of 3x3 and 4x4 matrices, but I have no idea when you'd actually need to use the determinant in 3D computer graphics.
What are some examples of using a determinant in 3D graphics programming?
Off the top of my head...
If the determinant is 0 then the matrix cannot be inverted, which can be useful to know.
If the determinant is negative, then objects transformed by the matrix will be reversed as if in a mirror (left-handedness becomes right-handedness and vice versa)
For 3x3 matrices, the volume of an object will be multiplied by (the absolute value of) the determinant when it is transformed by the matrix. Knowing this could be useful for determining, for example, the level of detail / number of polygons to use when rendering an object.
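To make those three points concrete, a small numpy sketch (the function name and the singularity threshold are just illustrative):

    import numpy as np

    def classify_transform(m, eps=1e-12):
        """Classify a 3x3 linear transform by its determinant."""
        d = np.linalg.det(m)
        if abs(d) < eps:
            return "singular (not invertible)"
        hand = "mirrored" if d < 0 else "orientation-preserving"
        return f"{hand}, scales volume by {abs(d):g}"

    print(classify_transform(np.diag([2.0, 2.0, 2.0])))   # volume x8
    print(classify_transform(np.diag([1.0, 1.0, -1.0])))  # mirrored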
In 3D vector graphics,
4x4 homogeneous transform matrices are used, and we need both the direct and the inverse matrices, which can be computed via (sub)determinants. But for orthogonal matrices there are faster and more accurate methods, like
the full pseudo inverse matrix.
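The orthogonal shortcut mentioned above, sketched in numpy (assuming a rigid 4x4 transform, i.e. an orthonormal 3x3 block plus a translation column):

    import numpy as np

    def invert_rigid(m):
        """Invert a 4x4 homogeneous matrix with an orthonormal 3x3 block:
        the inverse of the rotation part is simply its transpose."""
        r = m[:3, :3].T
        inv = np.eye(4)
        inv[:3, :3] = r
        inv[:3, 3] = -r @ m[:3, 3]
        return inv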
Many intersection tests use determinants (or can be converted to use them), especially for quadratic equations (ellipsoids, ...); for example:
ray and ellipsoid intersection accuracy improvement
As Matt Timmermans suggested, you can decide whether your matrix is invertible or left/right-handed, which is useful for detecting errors in matrices (accuracy degradation) or when porting skeletons between formats or engines, etc.
And I am sure there are a lot of other uses for it in vector math (IIRC IGES uses them for rotational surfaces, the cross product is a determinant, ...)
The incircle test is a key primitive for computing Voronoi diagrams and Delaunay triangulations. It is given by the sign of a 4x4 determinant.
(picture from https://www.cs.cmu.edu/~quake/robust.html)
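For concreteness, the plain floating-point version of that 4x4 determinant in numpy (Shewchuk's page linked above is about making this test robust, which this sketch is not):

    import numpy as np

    def incircle(a, b, c, d):
        """Sign of the 4x4 incircle determinant.
        Positive if d lies inside the circle through a, b, c
        (with a, b, c given in counterclockwise order)."""
        rows = [[px, py, px * px + py * py, 1.0] for px, py in (a, b, c, d)]
        return np.sign(np.linalg.det(np.array(rows)))

    # d at the origin is inside the unit circle through the other three points:
    print(incircle((1, 0), (0, 1), (-1, 0), (0, 0)))  # 1.0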
I have a large matrix (an image) and a small template. I would like to convolve the small matrix with the larger matrix. For example, the blue region is the section that I want to be used for the convolution. In other words, I could run the convolution over the whole image, but since that increases the CPU time, I would like to focus on just the desired blue part.
Is there any command in MATLAB that can be used for this convolution? Or how can I force the convolution function to use just that specific irregular section for the convolution?
I doubt you can do an irregular shape (fast convolution is done with a 2D FFT, which requires a rectangular region). You could optimize it by finding the shape's bounding box and thus discarding the empty border.
@Nicole I would go for ifft2(fft2(im, M, N) .* fft2(smallIm, M, N)), which, with zero-padding to a common size M×N (at least size(im) + size(smallIm) - 1 in each dimension), is the equivalent of conv2(im, smallIm).
As far as recognizing the irregular shape goes, you can use an edge detector like Canny; since Canny returns a binary (1/0) image, you can find the extreme (left, right, top, bottom) points and prepare a bounding box from those values. However, this will take some time to compute, and I'm not sure how much faster the whole thing will be.
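A sketch of the bounding-box idea in Python with scipy (in MATLAB the same thing is conv2 on the cropped block); mask is an assumed binary image marking the irregular region of interest:

    import numpy as np
    from scipy.signal import fftconvolve

    def convolve_region(im, kernel, mask):
        """Convolve only inside the bounding box of a binary mask."""
        rows, cols = np.nonzero(mask)
        r0, r1 = rows.min(), rows.max() + 1
        c0, c1 = cols.min(), cols.max() + 1
        out = np.zeros_like(im, dtype=float)
        # 'same' keeps the crop's size; the crop's borders see zero-padding.
        out[r0:r1, c0:c1] = fftconvolve(im[r0:r1, c0:c1], kernel, mode='same')
        return out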
Basically, I am solving the diffusion equation in 3D using FFTs, and one of the ways to parallelise this is to decompose the 3D FFT into 2D (and 1D) FFTs.
As described in this paper: https://cmb.ornl.gov/members/z8g/csproject-report.pdf
The way to decompose a 3D FFT would be by doing:
a 2D FFT in the xy direction
a global transpose
a 1D FFT in the z direction
Basically, my problem is that I am not sure how to do this global transpose (as I assume it involves transposing a 3D array). Has anyone come across this? Thanks a lot.
Think of a 3D cube with nx*ny*nz elements. The 3D FFT of these elements is mathematically 3 stages of 1D FFTs, one along each axis:
Do ny*nz transforms along the X axis, each transform handling nx elements
nx*nz transforms along the Y axis
nx*ny transforms along the Z axis
More generally, an N-dimensional FFT (N > 1) is composed of many (N-1)-dimensional FFTs over the slices along one axis, followed by 1D FFTs along that axis.
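A quick numpy check that the three 1D stages reproduce the library's 3D FFT:

    import numpy as np

    x = np.random.rand(4, 5, 6) + 1j * np.random.rand(4, 5, 6)

    # Three stages of 1D FFTs, one along each axis...
    staged = np.fft.fft(np.fft.fft(np.fft.fft(x, axis=0), axis=1), axis=2)

    # ...agree with the 3D transform.
    assert np.allclose(staged, np.fft.fftn(x))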
If the signal is real and you have an FFT that can return the half spectrum, then stage 1 would be about half as expensive (a real FFT is cheaper); the remaining stages need to be complex, but they only need about half as many transforms. So the cost is roughly halved.
If your 1d FFT can read input elements that are strided and pack the output into a contiguous buffer, then you end up doing a transposition at each stage.
This is how kissfft performs multi-dimensional FFTs.
P.S. When I need to get a mental picture of higher dimensions, I think of:
sheets of paper with matrices of numbers (2D), in folders of numbered papers (3D), in numbered filing cabinets (4D), in numbered rooms (5D), in numbered buildings (6D), and so on... That way I can visualize the "filing cabinet" dimension.
The "global transposition" mentioned in the paper is not a mathematical operation, but a rearrangement of data between the distributed memory machines.
The data calculated on one machine in step 1 has to be transferred to all other machines, vice versa, for step to. It has nothing to do with a matrix transposition.
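A minimal mpi4py sketch of that exchange (a sketch under assumptions: slab decomposition along z, then along x, with every dimension divisible by the number of ranks):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    p, rank = comm.Get_size(), comm.Get_rank()

    nx = ny = nz = 4 * p                      # assumed divisible by p
    local = np.random.rand(nx, ny, nz // p)   # this rank's slab: all x, y; a chunk of z

    # Stage 1: 2D FFTs over the locally complete xy planes.
    local = np.fft.fft2(local, axes=(0, 1))

    # "Global transpose": split the x axis into p chunks and exchange them,
    # so that each rank ends up holding all of z for its own chunk of x.
    send = np.ascontiguousarray(local.reshape(p, nx // p, ny, nz // p))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)                 # recv[j] = x-chunk `rank`, z-chunk j
    local = np.concatenate(recv, axis=2)      # shape (nx // p, ny, nz)

    # Stage 2: 1D FFTs along the now locally complete z axis.
    local = np.fft.fft(local, axis=2)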
I have a set of points whose coordinates are given by the arrays x, y and z, and the value of the density field at each point is stored in the array d.
I would like to reconstruct the density field on a uniform grid. What's the best algorithm to do that?
I know that in Python the scipy module comes in handy with the griddata function, but I would like to write my own code; I just need a hint.
If you have some sort of scalar field and the points are the origins of the field, you can implement a brute force approach by walking all lattice points and calculating the field intensity given the sources. There are both recursive methods that allow "blanking" wide volumes where the field is more or less constant, and techniques to save some CPU time by calculating the variations from one point to the next.
If the points you have are samplings of a value, then you will have to decompose your space into volumes and interpolate the values. You can employ a simple Voronoi decomposition - this is usually done in 2D for precipitation measurements - or a Delaunay tetrahedralization (you can look into TetGen's documentation). The first approach assumes that the function is constant throughout each Voronoi volume; the latter allows a piecewise linear (barycentric) interpolation within each tetrahedron.
If you need to smooth a 3D grid, the trilinear interpolation looks like the best approach.
There are also other methods used for fast visualization that involve maintaining a list of 3D points in order of distance from any given point of your regular grid. When moving through the grid, you recalculate the distances using quadratic increments. Then you perform a simple interpolation based on a subset of points of chosen cardinality (i.e., if you consider the four nearest points at distances d1..d4, you would calculate the value at P by weighting the values v1..v4 in inverse proportion to those distances; see the sketch below). This approach is fast and easy to implement yourself, but be warned that it underperforms wherever the minimum distance between points is less than the lattice step (you can compensate by considering more points where this happens; the effect is less evident if the sampled function is smooth at that scale).
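A sketch of that nearest-points weighting in Python (inverse-distance weights over the k nearest samples; k=4 matches the d1..d4 example above, and the eps guard is an assumption of this sketch):

    import numpy as np
    from scipy.spatial import cKDTree

    def idw_to_grid(points, values, grid_pts, k=4, eps=1e-12):
        """Inverse-distance-weighted value of the k nearest samples
        at each query point of the regular grid."""
        dist, idx = cKDTree(points).query(grid_pts, k=k)
        w = 1.0 / (dist + eps)                 # eps guards exact hits
        return (w * values[idx]).sum(axis=1) / w.sum(axis=1)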
If you want to implement a mathematical method yourself, you need to learn the theory, of course. In this case, it's 3D scattered data interpolation.
Wikipedia, MATLAB help and scipy help say there are at least half a dozen different methods. WP has a fairly good description of them and there's a comparison article but I strongly suggest you find something in your native language on such a terminology-intensive subject.
One approach is to form the Delaunay triangulation of the scattered points [x,y,z] (actually a tetrahedralisation in your 3D case!) and perform interpolation within each element using a linear representation of the density field defined at the tetrahedron vertices.
To evaluate the density at each structured grid point you would (i) determine which tetrahedron the point lies within and (ii) evaluate the linear interpolant.
Forming the Delaunay triangulation is non-trivial, but there are a few good libraries that can be used for this, depending on your language of choice. One good option is CGAL.
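For reference, scipy bundles exactly this pipeline (Qhull Delaunay plus barycentric linear interpolation), which is useful as a correctness check while writing your own (the toy density field below is just illustrative):

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    pts = np.random.rand(200, 3)               # scattered (x, y, z) samples
    d = pts[:, 0] + 2.0 * pts[:, 1]            # toy density values

    interp = LinearNDInterpolator(pts, d)      # builds the tetrahedralisation
    X, Y, Z = np.meshgrid(*[np.linspace(0.1, 0.9, 8)] * 3, indexing='ij')
    density = interp(X, Y, Z)                  # NaN outside the convex hull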
Hope this helps.