MATLAB - Exponential (exp2) curve fitting function not giving the same output as the plotted graph when using the fit values in the original equation - curve-fitting

My brain is pickled with this one. Below are the two graphs I have plotted with the exp2 function. The points do not match the curve, and this ultimately changes my entire answer, since it gives the wrong values out, and I cannot understand why.
(two plot screenshots omitted)
Here is the code I am using. Both graphs plot a concentration against time, yet give different results:
CH4_fit = fit(Res_time, CH4_exp, 'exp2');
CH4_coeff = coeffvalues(CH4_fit);  % coefficient values for the exponential fit
CH4_pred = (CH4_coeff(1)*exp(CH4_coeff(2)*Res_time)) + ...
           (CH4_coeff(3)*exp(CH4_coeff(4)*Res_time));
plot(Res_time, CH4_exp, Res_time, CH4_pred);
Can I just add that the exact same data was run on different computers, and it gave exactly the same equation coefficients (to 4 d.p.) and the same times, yet it still outputs different concentrations on my version? I have R2018b, and I have just used the default settings (I don't know how to change anything, so I definitely haven't).
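For reference, MATLAB's 'exp2' model is y = a*exp(b*x) + c*exp(d*x). A minimal Python sketch of the prediction step (the coefficient values here are made up purely for illustration, not taken from the question's data):

```python
import math

def exp2(t, a, b, c, d):
    """Two-term exponential model matching MATLAB's 'exp2' fit type:
    y = a*exp(b*t) + c*exp(d*t)."""
    return a * math.exp(b * t) + c * math.exp(d * t)

# Hypothetical coefficients, for illustration only.
a, b, c, d = 2.0, -0.5, 1.0, -0.1
times = [0.0, 1.0, 2.0]
pred = [exp2(t, a, b, c, d) for t in times]
```

One thing worth checking in MATLAB: evaluating CH4_fit(Res_time) directly avoids any rounding introduced by re-typing coefficients, since coeffvalues returns full precision while displayed values are rounded.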

Related

Finding the position of edge defects of a circular object with MATLAB

I have a problem finding defects at the edge of a circular object. It's hard to describe, so I have a picture which may help a bit. I am trying to find the red marked areas, such as those shown below:
I already tried matching with templates vision.TemplateMatcher(), but this only works well for the picture I made the template of.
I tried to match it with vision.CascadeObjectDetector() and I trained it with 150 images. I found only < 5% correct results with this.
I also tried matching with detectSURFFeatures() and then matchFeatures(), but this only works on quite similar defects (when the edges are not closed it fails).
Since the defects are close to half of a circle, I tried to find them with imfindcircles(), but that returns many possible results. When I take the one with the highest metric I sometimes get the right one, but not even close to 30% of the time.
Do any of you have an idea what I can try to find at least more than 50%?
If someone has an idea and wants to try something I added another picture.
Since I am new I can only add two pictures but if you need more I can provide more pictures.
Are you trying to detect rough edges like that on smooth binary overlays like the ones you provided? E.g., are you making a program whose input is a black image with lots of circles with rough edges which it is then supposed to detect, i.e. sudden rough discontinuities in a normally very smooth region?
If the above assumption is valid, then this may be solvable via classical signal processing. My suggestion: plot a graph of the intensity along a line between any two points, one outside and one inside the circle. It should look like
.. continuous constant ... continuous constant .. continuous constant.. DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! ... continuous constant .. continuous constant..
Write your own function to detect these discontinuities.
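A minimal sketch of such a discontinuity detector in Python (the jump threshold is an assumption you would tune to your intensity range):

```python
def find_discontinuities(profile, jump=5):
    """Return indices where consecutive intensity samples differ by more
    than `jump` -- a crude marker for the discontinuous, varying stretch."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) > jump]

# A mostly-constant profile with a rough patch in the middle.
profile = [10, 10, 10, 30, 5, 40, 10, 10]
jumps = find_discontinuities(profile, jump=5)  # indices 3..6
```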
OR
Gradient: The rate of change of certain quantities w.r.t a distance measure.
Use the very famous Sobel (gradient) filter.
Use the X-axis version of the filter and look at the result; if it gives you something detectable, use it, then do the same with the Y-axis version of the filter.
In case you're wondering: if you're using MATLAB, you just need to take the readily available 3x3 Sobel matrix (seen almost everywhere on the internet) and plug it into the imfilter function, or use the built-in implementation, edge(image,'sobel') (if you have the required toolbox).
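If the toolbox isn't available, the Sobel step can be sketched in plain Python (an unoptimized 3x3 cross-correlation on a list-of-lists grayscale image; in practice you would use imfilter, edge, or OpenCV):

```python
def sobel_x(img):
    """Apply the 3x3 horizontal Sobel kernel to a 2D list of gray
    values. Border pixels are left at zero for simplicity."""
    kx = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(kx[i][j] * img[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3))
    return out

# A vertical step edge: the response peaks along the step.
step = [[0, 0, 10, 10] for _ in range(4)]
gx = sobel_x(step)
```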

What does RiBasis, as described in RenderMan, mean?

I'm working on a plugin for 3ds Max. In this plugin, I export the geometry information into a .rib file which can be rendered by a RenderMan renderer. I export a NURBS curve's data into the .rib file, described by RiBasis and RiCurve. I use RtBSplineBasis in RiBasis, but I get the wrong result: the rendered curve is shorter than the result of 3ds Max's renderer. When I repeat the first and the last control vertex, the curve is long enough, but its shape is a little different. Can anyone tell me why I get the wrong result, or what RiBasis means? How can I get the correct RiBasis? Thank you very much!
RiCurve draws a cubic spline. The control points do not uniquely determine the curve; you also need the basis, which is expressed as a 4x4 matrix -- one matrix gives the coefficients you need for a B-spline, another for Bezier, another for Catmull-Rom, and so on, and of course you can also supply the matrix yourself for some kind of hybrid interpolant that isn't quite one of the standard three or four. The basis determines the character of the spline -- whether the curve is guaranteed to go through the control points or is merely approximating, the degree of continuity, the "tension", and so on.
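To make the basis-matrix idea concrete, here is a small Python sketch evaluating one cubic segment as P(t) = T·M·G, with the standard Bezier and uniform B-spline matrices and scalar control values (illustration only, not RenderMan API code):

```python
# P(t) = T . M . G, with T = [t^3, t^2, t, 1], M the 4x4 basis matrix,
# and G four control values (scalars here for simplicity).
BEZIER = [[-1, 3, -3, 1],
          [3, -6, 3, 0],
          [-3, 3, 0, 0],
          [1, 0, 0, 0]]

BSPLINE = [[-1/6, 3/6, -3/6, 1/6],
           [3/6, -6/6, 3/6, 0],
           [-3/6, 0, 3/6, 0],
           [1/6, 4/6, 1/6, 0]]

def eval_spline(basis, ctrl, t):
    """Evaluate one cubic spline segment at parameter t."""
    T = [t**3, t**2, t, 1]
    coeffs = [sum(T[i] * basis[i][j] for i in range(4)) for j in range(4)]
    return sum(coeffs[j] * ctrl[j] for j in range(4))

ctrl = [0.0, 1.0, 2.0, 3.0]
b0 = eval_spline(BEZIER, ctrl, 0.0)   # Bezier interpolates: equals ctrl[0]
b1 = eval_spline(BEZIER, ctrl, 1.0)   # ...and ctrl[3]
s0 = eval_spline(BSPLINE, ctrl, 0.0)  # B-spline only approximates:
                                      # (ctrl[0] + 4*ctrl[1] + ctrl[2]) / 6
```

The B-spline segment starting inside the convex hull rather than at the end control point is exactly why a B-spline curve can look "shorter" than expected unless end vertices are repeated.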
There is a great discussion in one of the appendices of "The RenderMan Companion," including numeric examples of how different basis matrices affect the interpolation.
It sounds like you requested a B-spline basis, which is approximating (not interpolating) and continuous in both 1st and 2nd derivatives. Maybe that's not what you had in mind. It's hard to tell, since you didn't describe the properties of the spline that you were hoping for.
As an aside, approximating an arbitrary NURBS curve with a nonrational cubic is not always going to give you an exact match. Something else to keep in mind.

How to detect similar color regions in an image?

I have created a simple program to generate random images, giving a random color to each pixel. I know there is a very low chance of generating a recognizable image, but I would like to try.
I have observed that the longest part of the work is checking whether the images are really something, and that most of the images produced are just fields of colorful individual pixels. That's why I would like to ask for an algorithm, in pseudocode, to detect similar color regions in an image. I think the easiest way to find meaningful images is to filter out all those random-pixel images. It's not perfect, but I think it will help.
If someone could propose another kind of filtering algorithm that would help with this task, I would also appreciate it.
To clarify this, in case my explanation was not clear enough, I will show you some images:
This is the kind of image I'm getting; basically I would describe it as "colorful noise". As you can see, all the pixels are spread individually, without grouping into similar color regions that could hopefully form shapes of objects or anything recognizable.
Here you can see a conventional, "recognizable" picture. We can clearly see a dog lying on the grass with a tennis ball. If you observe this picture carefully, it can be clearly distinguished from the other one because it has groupings of similar colors that we can tell apart (the dog, a white region; the grass, a dark green region; and the tennis ball, a light green region).
What I exactly want is to discard the "pixelly" images before saving them to disk and only save the ones with color groupings. As I said before, this idea is the best I had for filtering these randomly generated images, but if someone proposes another, more efficient way, I would really appreciate it.
Ok, I think this post is becoming too long... If someone wants to have a look, here is the code of the program I wrote. It's really straightforward. I programmed it in Python using Pygame. I know this isn't nearly the most efficient way to do it; I'm aware of that. The thing is that I'm quite a noob in this field and I don't really know another way to do this in other languages or modules. Maybe you could also help me with this... I don't know, maybe translate the code to C++? I feel I'm asking too many questions in the same post but, as I said tons of times, any help would be greatly appreciated.
import pygame, random

pygame.init()
# lots of printed text statements here
imageX = int(input("Enter the width of the image you want to produce: "))
imageY = int(input("Enter the height of the image you want to produce: "))
maxImages = int(input("Enter the maximum number of images you want to produce: "))
maxMem = int(input("Enter the maximum amount of data you want to produce (MB; only works with 800x600 images): "))
maxPPS = int(input("Enter the maximum number of images you want to produce each second: "))
firstSeed = int(input("Enter the first seed you want to use: "))
print("\n\n\n\n")

seed = firstSeed
clock = pygame.time.Clock()
images = 0
keepGoing = True

while keepGoing:
    # seed
    random.seed(seed)
    # cap images produced per second
    clock.tick(maxPPS)
    # surface
    image = pygame.Surface((imageX, imageY))
    # generation
    for x in range(imageX):
        for y in range(imageY):
            red = random.randint(0, 255)
            green = random.randint(0, 255)
            blue = random.randint(0, 255)
            image.set_at((x, y), (red, green, blue))
    # save
    pygame.image.save(image, str(seed) + ".png")
    # update parameters
    seed += 1
    images += 1
    # print seed
    print(seed - 1)
    # check end
    if images >= maxImages:
        keepGoing = False
    elif (images * 1.37) >= maxMem:
        keepGoing = False
    pygame.event.pump()

print("\n\nThis is the last seed that was used: " + str(seed - 1))
input("\nPress Enter to exit")
Here is a butchered algorithm for you to try (try it in OpenCV):
Get image
Work with just one color dimension of the image i.e. Red or Green... or Gray ...or do the following for each separately
Sum up all the values in the image and save this as the "energy" value of the image
Use OpenCV's Smooth function to blur the image
The trick to blurring the image correctly is to choose the size of the kernel (aka filter) to be smaller than the width of important features and larger than the unimportant or noisy features. The size is controlled by defining param1 and param2.
See http://opencv.willowgarage.com/documentation/python/imgproc_image_filtering.html
Now for the output, sum up all the values to get the output "energy"
Keep the image if the output has at least half of the energy of the input. Technically, this is the same as choosing 50 percent as the threshold to keep or discard images, so changing the threshold here is approximately the same as changing the filter size.
Optional. No need to think too much about it though, just get these energy values for some set of the images and choose the threshold by eye.
What is happening?
You're filtering out high frequencies and then seeing if there is still something left over. Most images have lots of energy at lower spatial frequencies; in fact, JPEG compression uses this fact to compress images. The filter must have an energy of one to work correctly, so I'm assuming that this is true.
Hope this helps!
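One caveat worth flagging: a mean-preserving blur leaves the plain pixel sum unchanged, so for the comparison to discriminate, "energy" has to measure variation rather than the raw sum. A minimal Python sketch interpreting energy as the variance of pixel values (that interpretation is my assumption, not the exact recipe above):

```python
import random

def box_blur(img):
    """3x3 mean filter on a 2D list; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(img[r + i][c + j]
                            for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9
    return out

def energy(img):
    """'Energy' taken as the variance of pixel values around the mean."""
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

random.seed(0)
noise = [[random.randint(0, 255) for _ in range(20)] for _ in range(20)]
# For pure noise, blurring destroys most of this energy, so the ratio
# typically falls well below a 0.5 keep-threshold.
ratio = energy(box_blur(noise)) / energy(noise)
```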
The simplest way of filtering out noise is to look for correlation. Nearby regions should be highly correlated in most of the image. There are so many ways to do it.
You should use a combination of the following, with some tweaking of the parameters, to get an acceptable hit/miss ratio:
Color correlation: you will find a huge amount of correlation in U/V between nearby regions in "proper" images.
Edge detection: natural images tend to have well-defined edges; this is the easiest way to separate noise from natural images.
Quite a bit more can be done. Frequency analysis: noisy images contain all frequencies, while natural images usually have large peaks. Scale-space analysis, etc., depending on how complex you want to get and what hit ratio you are willing to tolerate. In general, trying to get recognizable images is an open-ended topic, but you should be able to get a very high hit ratio if you specifically want to remove noise images like the one in your example.
EDIT:
In general there are no exact algorithms for problems like this. You have to make assumptions about the properties of the underlying data, then use basic primitives (correlation, frequency-domain data, edges, etc.) and combine them into an algorithm for solving the problem, because the solution to problems like this is very data specific. This is quite different from solving, say, classic computer science algorithms. That is not to say that signal processing algorithms lack exactness; rather, your current problem, and many others, deal with what are known as random variables and stochastic processes. You may want to search whether someone has tried to solve this problem in the literature or at some university, use that as your starting point, and tweak the algorithm to suit you. But you are not going to get a solution easily unless you take some time to understand the things I mentioned and are willing to do some experiments and empirical analysis.
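The correlation idea can be sketched crudely in Python; here "correlation" is approximated by the normalized mean absolute difference between horizontally adjacent pixels (single gray channel; the scale and any decision threshold are assumptions to tune):

```python
import random

def roughness(img):
    """Mean absolute difference between horizontally adjacent pixels,
    normalized by 255: near 1/3 for uniform random noise, much lower
    for natural images with smooth, correlated regions."""
    diffs = [abs(row[c] - row[c + 1])
             for row in img for c in range(len(row) - 1)]
    return sum(diffs) / (len(diffs) * 255)

flat = [[100] * 10 for _ in range(5)]  # perfectly smooth region
random.seed(1)
noise = [[random.randint(0, 255) for _ in range(30)] for _ in range(30)]
```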
Without knowing exactly what you're trying to achieve, it's difficult to offer specific help. But, reading your account did remind me of something I saw recently which, whilst quite different in implementation, has a similar end goal: generate a recognisable image from randomness.
Check out https://github.com/phl/pareidoloop by Philip McCarthy.
Philip's project starts with random polygons and the algorithm favours face like images. Two key points here: the polygons significantly reduce the amount of random noise right off the bat so the chances of generating something recognisable are significantly increased. Secondly, the algorithm favours a specific type of recognisable image: I suspect you're going to have to work towards a specific type of image so that you have some parameters with which to computationally estimate "recognisability".
hth!

Recognizing distortions in a regular grid

To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with:
(example images omitted: the regular grid and the distorted grid)
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a Matlab Implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up with something like this, representing the "displacement field", showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity))
Post me a comment if you are interested in the Mathematica code for this one. I don't think my code is going to help anybody else, so I'll omit posting it.
I would also suggest a line tracking algorithm would work well.
Simply start at the first pixel row of the image and follow each of the vertical lines downwards. (You only need the first row to get the starting points.) This can be done with a simple scheme that moves orthogonally to the gradient of the line, i.e. follows the line. When you reach a crossing with a horizontal line, you can measure that point (in x, y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds in both images. Then you simply compare the two points by computing their distance. Do this for every line on your grid and you will get how far each crossing point of the grid is displaced.
This line-following approach is also used in basic edge-linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a ready-made algorithm, but I guess it should work well on distorted images like the ones you have there... maybe it is helpful for you.)
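The crossing-point comparison above reduces to pairing corresponding crossings and measuring their offsets. A sketch, assuming both lists already hold corresponding crossings in the same order:

```python
import math

def displacement_field(ref_points, distorted_points):
    """Per-crossing displacement vector and magnitude between the
    reference grid and the distorted grid."""
    field = []
    for (x0, y0), (x1, y1) in zip(ref_points, distorted_points):
        dx, dy = x1 - x0, y1 - y0
        field.append(((dx, dy), math.hypot(dx, dy)))
    return field

# Two crossings with hypothetical coordinates: one shifted in x, one in y.
ref = [(0.0, 0.0), (10.0, 0.0)]
warped = [(1.0, 0.0), (10.0, 2.0)]
field = displacement_field(ref, warped)
```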

Algorithm for following the path of ridges on a 3D image

I'm trying to find an algorithm (or algorithm ideas) for following a ridge on a 3D image derived from a digital elevation model (DEM). I've managed to get a very basic program working which just iterates across each row of the image, marking a ridge line wherever it finds a large change in aspect (i.e. from < 180 degrees to > 180 degrees).
However, the lines this produces aren't brilliant, there are often gaps and various strange artefacts. I'm hoping to try and extend this by using some sort of algorithm to follow the ridge lines, thus producing lines that are complete (that is, no gaps) and more accurate.
A number of people have mentioned snake algorithms to me, but they don't seem to be quite what I'm looking for. I've also done a lot of searching about path-finding algorithms, but again, they don't seem to be quite the right thing.
Does anyone have any suggestions for types or algorithms or specific algorithms I should look at?
Update: I've been asked to add some more detail on the exact area I'll be applying this to. I'm working with gridded elevation data of sand dunes, trying to extract the crests of these dunes, which look similar to the boundaries between drainage basins but can be far more complex (for example, there can be multiple sand dunes very close to each other with gradually merging crests).
You can get a good estimate of the ridges using sign changes of the curvature. Note that the radius of curvature (1/curvature) will be near infinity in flat regions and near zero along sharp ridges. Hence possible pseudocode for a ridge detection algorithm could be:
for each face in the mesh
    compute 1/curvature
    if abs(1/curvature) < zeroTolerance
        flag face as ridge
    else
        continue
(zeroTolerance is a number near but not equal to zero, e.g. 0.003)
Also Meshlab provides a module for normal & curvature estimation on most formats. You can test the idea using it, before you code it up.
I don't know what your data is like or how much automation you need. This won't work if it consists of peaks without clear ridges (but then you probably wouldn't be asking the question).
startPoint = highest point in DEM (or on ridge)
curPoint = startPoint
line += curPoint
Loop
    curPoint = highest point adjacent to curPoint not in line  // don't backtrack
    line += curPoint
Repeat
Curious what the real solution turns out to be.
Edited to add: depending on the coarseness of your data set, 'point' can be a single point or a smoothed average of a local region of points.
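The greedy walk described above can be sketched on a toy gridded DEM (8-connected neighbors, no backtracking; real data would need the local smoothing mentioned above):

```python
def follow_ridge(dem, start, steps):
    """Greedily walk from `start` to the highest not-yet-visited
    neighbor, collecting the path (a crude ridge line)."""
    path = [start]
    visited = {start}
    cur = start
    for _ in range(steps):
        r, c = cur
        candidates = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < len(dem)
                      and 0 <= c + dc < len(dem[0])
                      and (r + dr, c + dc) not in visited]
        if not candidates:
            break
        cur = max(candidates, key=lambda p: dem[p[0]][p[1]])
        path.append(cur)
        visited.add(cur)
    return path

# Toy DEM with a ridge running along the middle row.
dem = [[1, 1, 1, 1],
       [5, 6, 7, 8],
       [1, 1, 1, 1]]
path = follow_ridge(dem, (1, 0), steps=3)  # walks along the ridge
```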
http://en.wikipedia.org/wiki/Ridge_detection
You can treat the elevation as you would a grayscale color, then use a 2D edge recognition filter. There are lots of edge recognition methods available. The best would depend on your specific needs.
