How to do an animated parametric plot with a non-static trail - animation

Application used: EveryCircuit
Example circuits used (the book icon in the left sidebar): "LED Array", "RC step response"
So, the image is of a parametric plot: some quantities on the vertical axis (legend indicated by #3 in the image) and one quantity on the horizontal axis (legend indicated by #2).
There is one parameter with respect to which all values change (time).
In the plot, the trail of values is kept for some time and then erased. Keeping the trail helps when the system is stable, and erasing it declutters the plot and lets you zoom into the new relevant area/scale when the system changes its characteristics.
How can this animation be done in other, code-based data analysis software, like Python/Octave/Julia etc.?
By animation I mean:
the values changing as the parameter changes (i.e. the normal plot),
but the trail shown for only some time (hence, non-static),
while still retaining some way to change the parameter and see which point corresponds to it (also known as "tracking" the plot).
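In Python, for example, matplotlib can reproduce this directly. A minimal sketch with FuncAnimation, where only the last N samples of the curve are drawn (the trail length, the spiral test curve, and all names here are my assumptions, not anything from EveryCircuit):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Example parametric curve (a spiral); any x(t), y(t) works.
t = np.linspace(0, 8 * np.pi, 2000)
x = t * np.cos(t)
y = t * np.sin(t)

TRAIL = 200  # number of recent samples kept on screen

fig, ax = plt.subplots()
line, = ax.plot([], [], lw=1.5)   # the non-static trail
head, = ax.plot([], [], 'o')      # marker tracking the current point

def update(i):
    start = max(0, i - TRAIL)     # erase everything older than TRAIL
    line.set_data(x[start:i + 1], y[start:i + 1])
    head.set_data([x[i]], [y[i]])
    # Rescale the axes to the current trail, so the view follows the
    # new relevant area/scale when the system changes character.
    ax.relim()
    ax.autoscale_view()
    return line, head

anim = FuncAnimation(fig, update, frames=len(t), interval=20)
plt.show()
```

For the "track" behaviour, the same update(i) can be driven by a matplotlib.widgets.Slider over the parameter instead of FuncAnimation, so dragging the slider shows which point corresponds to each parameter value. Octave and Julia (e.g. with Plots.jl's @animate) support the same keep-a-window-of-samples idea.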

Related

How to identify if a set of lines is similar to a shape

Currently I have a program that lets the user paint on it by capturing the mouse position every 0.05 seconds and drawing a line between each point and the next. With that setup I am looking for a way to identify shapes like a circle, a rectangle or the letter 'P'.
My current algorithm divides the screen into sections, marks the sections containing points recorded from the player, builds a matrix from the marked sections, and then compares that matrix with every shape matrix.
This lacks any kind of support for rotations, sizes or positions. Also, controlling the threshold is tricky and in most cases returns false results.
I need an algorithm that can identify, for example, a 'P' as a 'P'.
Note: My current application runs on a C++ framework, so any libraries or tools are welcome, but I am mainly interested in the algorithm behind them.
Edit: After thinking the problem over, I have dropped the grid on the screen. Instead, I capture the points and shift and resize them so the shape fits on a grid, and over that grid I compare it with the known shapes.
Picture of the process
This solves the position and size problems while being fast enough. Rotating the input and then resizing in a loop may also solve the rotation problem (though it seems this would have a high cost and would not be very reliable).
I would gladly welcome alternative methods of handling shape comparison or the rotation.
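For illustration, a minimal sketch of this normalize-then-compare step (in Python/NumPy rather than C++ for brevity; the grid size and the Jaccard-style overlap score are my assumptions, not the asker's code):

```python
import numpy as np

GRID = 16  # resolution of the comparison grid (an assumption)

def to_grid(points):
    """Shift and resize captured (x, y) points so the shape's
    bounding box fills a GRID x GRID boolean matrix."""
    pts = np.asarray(points, dtype=float)
    mins = pts.min(axis=0)
    span = np.maximum(pts.max(axis=0) - mins, 1e-9)  # avoid /0
    cells = ((pts - mins) / span * (GRID - 1)).round().astype(int)
    grid = np.zeros((GRID, GRID), dtype=bool)
    grid[cells[:, 1], cells[:, 0]] = True
    return grid

def similarity(grid, template):
    """Overlap score between the drawing and a known shape matrix
    (both GRID x GRID booleans): intersection over union."""
    union = np.logical_or(grid, template).sum()
    return np.logical_and(grid, template).sum() / max(union, 1)
```

Rotation could then be handled as the edit anticipates: rotate the input points in fixed steps and keep the best score over all rotations.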

Three.js tubeGeometry not coloring properly

I have code that lets users enter data and plots it with a tube geometry. The code seems to work fine most of the time; however, one of the test data sets is not coloring properly.
Here is an example page for a site I am building that solves for the position and velocity of a bungee jumper. Scroll to the bottom of the page and you will see a Three.js environment with a sine wave and a plot of the position of the jumper. These two items are charted with separate color maps, and you can see that the sine wave is colored properly but the data is not.
At first I thought that maybe the data was too sparsely populated, but that was not the problem.
The code for this is too long to really paste here, but the fact that it charts right for all other data sets makes me think that I am missing something inherent to the tubeGeometry function.
Any ideas as to why the one tube is miscolored?
UPDATE: When I add additional interpolated points between each existing point in the data set, the error lessens. The more the padding, the less the error. This leads me to think that the error is due to the difference between the interpolation of the spline function from Three.SplineCurve3 and the true data. This would also explain why my other examples work fine, since they are all sinusoidal data.
How can I prevent SplineCurve3 from doing this, or what else can I use to create the Tube geometry?
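For reference, the padding workaround from the UPDATE amounts to densifying the polyline before the spline fit, so the fitted curve cannot stray far from the data. A minimal sketch of that step (shown in Python/NumPy for brevity; the helper name and default count are assumptions):

```python
import numpy as np

def densify(points, k=5):
    """Insert k linearly interpolated points between each pair of
    consecutive points, so a spline fitted through the result stays
    close to the original polyline."""
    points = np.asarray(points, dtype=float)
    out = []
    for p, q in zip(points[:-1], points[1:]):
        # t = 0 keeps the original point; the rest are interpolated.
        for t in np.linspace(0.0, 1.0, k + 1, endpoint=False):
            out.append((1 - t) * p + t * q)
    out.append(points[-1])
    return np.array(out)
```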
I guess it is a mesh length counting problem (Three.js does not compute the length as vector+vector+vector, but from mesh.position plus the bounding radius).
Maybe you can separate the curve into parts and color each part independently based on its length.
There are some working approaches:
https://stemkoski.github.io/Three.js/Graphulus-Curve.html
https://stemkoski.github.io/Three.js/Graphulus-Surface.html
https://stemkoski.github.io/Three.js/Graphulus-Function.html

What is the main idea of creating click heatmap?

In one of my projects, I would like to create a heatmap of user clicks. I searched for a while and found this library - http://www.patrick-wied.at/static/heatmapjs/examples.html . That is basically exactly what I would like to make. The only difference is that I would like to create the heatmap in SVG, if possible.
I would like to create my own heatmap and I'm just wondering how to do that. I have XY click positions. Each click mostly has a different XY position, but there can be exceptions from time to time; a few clicks can have the same XY position.
I found a few solutions based on a grid over the website, where you check which clicks fall into the same cell of the grid, and based on that information you fill the most-clicked cells with red or orange and so on. But it seems a little complicated to me, and perhaps slow for bigger grids.
So I'm wondering whether there is another way to "calculate" the heatmap colors; I would like to know the main idea used in the library above.
Many thanks
To make this kind of heat map, you need some kind of writable array (or, as you put it, a "grid"). User clicks are added onto this array in a cumulative fashion, by adding a small "filter" sub-array (aligned around each click) to the writable array.
Unfortunately, this "grid" method seems to be the easiest, simplest way to get that kind of smooth, blobby appearance. Fortunately, this kind of operation is well-supported by software and hardware, under the name "computer graphics".
When considered as a computer graphics operation, the writable array is called an "accumulation buffer". The filter is what gives you the nice blobby appearance, even with a relatively small number of clicks -- you can tweak the size of the filter according to the needs of your application.
After accumulating the user clicks, you will need to convert from the raw accumulated values to some kind of visible color scale. This may involve looking through the entire accumulation buffer to find the largest value, and mapping your chosen color scale accordingly. Alternately, you could adjust your scale according to the number of mouse clicks, or (as in the demo you linked to) just choose a fixed scale regardless of the content of the buffer.
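A minimal sketch of that pipeline, assuming a Gaussian splat as the filter and normalization against the buffer's maximum (Python/NumPy; all names and the patch shape are my assumptions):

```python
import numpy as np

def click_heatmap(clicks, width, height, radius=20):
    """Accumulation-buffer heatmap: splat a small Gaussian 'filter'
    patch around each click, then normalize to [0, 1] for a color
    scale."""
    buf = np.zeros((height, width), dtype=float)
    # Precompute the Gaussian filter patch once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    patch = np.exp(-(xx**2 + yy**2) / (2 * (radius / 2.0) ** 2))
    for cx, cy in clicks:
        # Clip the patch against the image borders.
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, width)
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, height)
        buf[y0:y1, x0:x1] += patch[
            (y0 - (cy - radius)):(y1 - (cy - radius)),
            (x0 - (cx - radius)):(x1 - (cx - radius)),
        ]
    peak = buf.max()
    return buf / peak if peak > 0 else buf
```

The normalized buffer can then be passed through any color ramp (e.g. blue to green to red) to get the familiar blobby look.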
Finally, I should mention that SVG is not well-adapted to representing this kind of graphic. It should probably be saved as some kind of image file (.jpg or .png) instead.

Invoice / OCR: Detect two important points in invoice image

I am currently working on OCR software and my idea is to use templates to try to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices, based on a single template, are correctly aligned under the scanner.
People can write on invoices
etc.
Example of an invoice: (you would have to google one; sadly I cannot add a more concrete version, as client data is obviously confidential)
I find my data in the invoices based on the x-values of the text.
However I need to know the scale of the invoice and the offset from left/right, before I can do any real calculations with all data that I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails because people can write on invoices.
2) Dividing the invoice into vertical sections and using the sections with the highest number of black pixels. This fails because the distribution is not always uniform among similar templates.
I could really use your help on (1) how to identify important points in invoices and (2) what I should focus on as the important points.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair number of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least squares method. Drop the outliers, and fit another line to the remaining points. Iterate this as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this for a number of regions, and you should get a weighted list of lines.
For each line, you can compute the direction of the line itself and the direction orthogonal to that. One of these numbers can be chosen from the interval [0°, 90°); the other will be 90° plus that value, so storing one is enough. Take all these directions, and find one angle which best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) region and find a value where the maximal number of lines are within the window, then compute the average or median of the angles within that window. All of this computation can be done taking the weights of the lines into account.
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
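A sketch of the sliding-window vote described above, assuming angles reduced modulo 90° and per-line weights (Python/NumPy; the step size and window width are assumptions):

```python
import numpy as np

def dominant_angle(angles_deg, weights, window=5.0):
    """Cyclic sliding-window vote on [0, 90) degrees: find the window
    holding the most (weighted) lines, then return the weighted mean
    angle inside it."""
    a = np.mod(np.asarray(angles_deg, dtype=float), 90.0)
    w = np.asarray(weights, dtype=float)
    best_score, best_angle = -1.0, 0.0
    for center in np.arange(0.0, 90.0, 0.5):
        # Signed distance on the cyclic 90-degree circle, in [-45, 45).
        d = np.mod(a - center + 45.0, 90.0) - 45.0
        inside = np.abs(d) <= window / 2.0
        if not inside.any():
            continue
        score = w[inside].sum()
        if score > best_score:
            best_score = score
            offset = np.average(d[inside], weights=w[inside])
            best_angle = np.mod(center + offset, 90.0)
    return best_angle  # rotate the scan by -best_angle to align it
```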
Detecting translation
Assuming the image wasn't scaled at any point, you can then try to use an FFT-based correlation of the image to match it to the template. Convert both images to gray, pad them with zeros so the originals take up at most 1/2 the edge length of the padded image, which should preferably be a power of two. FFT both images in both directions, multiply one element-wise by the complex conjugate of the other, and iFFT back. The resulting image encodes how well the two images would agree for a given shift relative to one another. Simply find the maximum, and you know how to make them match.
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and, assuming no scaling or shearing, you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
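A minimal sketch of that FFT-based correlation step, under the stated assumptions (2-D grayscale arrays, no scaling; Python/NumPy, the names are mine):

```python
import numpy as np

def find_shift(template, scan):
    """FFT cross-correlation of two 2-D grayscale arrays; returns the
    (dy, dx) shift at which they agree best. Zero-padding to twice the
    size avoids wrap-around from the circular correlation."""
    h = 2 * max(template.shape[0], scan.shape[0])
    w = 2 * max(template.shape[1], scan.shape[1])
    Fa = np.fft.fft2(template, s=(h, w))
    Fb = np.fft.fft2(scan, s=(h, w))
    # Correlation: multiply one spectrum by the conjugate of the other.
    corr = np.real(np.fft.ifft2(Fa * np.conj(Fb)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx  # sign convention depends on which image is conjugated
```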
If rotation is solved already, I'd just sum up all pixel color values horizontally and vertically to get a single horizontal / vertical "line" each. This should produce clear spikes where the form has horizontal and vertical lines.
p.s. I generated a corresponding horizontal image with Gimp's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high and may get scaled down because it's > 700 px wide; the URL is http://i.stack.imgur.com/Zy8zO.png ).
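A sketch of those projection profiles (Python/NumPy; assuming an 8-bit grayscale input):

```python
import numpy as np

def projection_profiles(gray):
    """Sum pixel intensities along each axis. On a form, dark
    horizontal/vertical rules show up as sharp spikes in these
    1-D profiles once intensities are inverted."""
    ink = 255.0 - gray.astype(float)   # dark pixels -> large values
    horizontal = ink.sum(axis=1)       # one value per image row
    vertical = ink.sum(axis=0)         # one value per image column
    return horizontal, vertical
```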

Recognizing distortions in a regular grid

To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with:
Regular: [image]  Distorted: [image]
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a MATLAB implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up with something like this, representing the "displacement field" and showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity))
Post me a comment if you are interested in the Mathematica code for this one. I think my code is not going to help anybody else, so I omit posting it.
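In its place, here is a hedged block-matching sketch of the same DIC idea (Python/NumPy rather than Mathematica; the window, search radius and step are my assumptions), matching each small block of the reference image against nearby offsets in the distorted image:

```python
import numpy as np

def dic_displacements(ref, cur, win=16, search=5, step=16):
    """Coarse digital image correlation: for each `win` x `win` block
    of `ref`, find the integer (dy, dx) within +/-`search` that
    minimizes the sum of squared differences against `cur`."""
    H, W = ref.shape
    field = []
    for y in range(search, H - win - search, step):
        for x in range(search, W - win - search, step):
            block = ref[y:y + win, x:x + win].astype(float)
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = cur[y + dy:y + dy + win,
                               x + dx:x + dx + win].astype(float)
                    ssd = np.sum((block - cand) ** 2)
                    if ssd < best[0]:
                        best = (ssd, dy, dx)
            field.append((x, y, best[2], best[1]))  # (x, y, u, v)
    return np.array(field)
```

Plotting the (u, v) columns as arrows at (x, y), e.g. with MATLAB's quiver or matplotlib's quiver, gives a displacement field like the vortex figure described above.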
I would also suggest that a line tracking algorithm would work well.
Simply start at the first pixel row of the image and follow each of the vertical lines downwards (you only need to start at the first row to get the starting points). This can be done with a simple pattern that moves orthogonally to the gradient of the line, i.e. follows the line. When you reach a crossing with a horizontal line, you can measure that point (in x,y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds in both images. Then you simply compare both points by computing their distance. Do this for each line of your grid, and you will get how far each crossing point of the grid is distorted.
This follow-a-line approach is also used in basic edge-linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a ready algorithm, but I guess it should work easily on distorted images like the ones you have... maybe it is helpful for you.)
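Still, the core of the tracker is simple; a minimal sketch of following one dark vertical line (Python/NumPy; the search radius is an assumption):

```python
import numpy as np

def track_line(gray, x_start, search=3):
    """Follow one dark vertical line from the top row downward: at
    each row, step to the darkest pixel within +/-`search` columns
    of the current position."""
    xs = [x_start]
    for y in range(1, gray.shape[0]):
        x = xs[-1]
        lo, hi = max(x - search, 0), min(x + search + 1, gray.shape[1])
        xs.append(lo + int(np.argmin(gray[y, lo:hi])))
    return np.array(xs)  # x position of the line at each row y
```

Running this on both images and comparing the n-th crossing on the m-th line, as described above, yields the per-crossing displacement.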
