Confidence interval with statsmodels is not displayed correctly - statsmodels

I would like to display the confidence intervals (not the predictions) on my linear regression model. I used the code given below.
confidence interval code
It seems to me that this code does not work when there are only a few points. How can I get continuous lines for the intervals on the figure (the link is below)?
Thank you in advance for your answers.
Picture of the results: linear regression confidence interval
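A minimal sketch of one way to draw the confidence band as continuous lines with statsmodels, assuming an ordinary least squares fit (the data and variable names here are illustrative, not from the question): evaluating get_prediction on a dense x grid, rather than only at the few observed points, gives smooth interval lines.

    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm

    x = np.array([1.0, 2.0, 3.5, 5.0, 7.0])   # only a few observed points
    y = np.array([2.1, 3.9, 6.8, 10.2, 13.7])

    res = sm.OLS(y, sm.add_constant(x)).fit()

    # Evaluate the confidence interval on a dense grid so the band is continuous
    x_grid = np.linspace(x.min(), x.max(), 200)
    pred = res.get_prediction(sm.add_constant(x_grid)).summary_frame(alpha=0.05)

    plt.scatter(x, y, label="data")
    plt.plot(x_grid, pred["mean"], label="fit")
    plt.plot(x_grid, pred["mean_ci_lower"], "r--", label="95% CI of the mean")
    plt.plot(x_grid, pred["mean_ci_upper"], "r--")
    plt.legend()
    plt.show()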

Related

Mode comes out different than what is seen in the distribution plot

I am trying to plot the distribution of the values in my file. The chart looks fine, but the data looks negatively skewed, yet when I calculated the skew it came out positive. I further analyzed the data, checking the mean, median, and mode to confirm the chart, and the mode is not at the highest point of the distribution chart.
I am using
sns.distplot(AVG.loc[AVG['Month'] == 'May 2020', 'Asset Value Growth'], hist=True, label='May 2020')
Can someone tell me what I am doing wrong here?
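One thing worth checking: the sign of the skew describes which tail is longer, not where the peak sits, and a mode computed from the raw values is the most frequent exact value, which for continuous data need not coincide with the peak of the KDE curve that distplot draws. A small sketch with synthetic data (the original AVG DataFrame is not available, so this is only illustrative):

    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(0)
    vals = pd.Series(rng.lognormal(mean=0.0, sigma=0.6, size=1000))  # right-skewed sample

    print("mean  :", vals.mean())
    print("median:", vals.median())
    print("skew  :", stats.skew(vals))  # positive: the longer tail is on the right,
                                        # even though the visual peak sits on the left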

How to plot a gray-scale plot of some data and their confidence interval

I would like to make a density plot of some vectors (some realizations, in a probabilistic sense) and also their confidence interval in MATLAB, like the figure I attached. The grey shadows are the data and the dashed lines are the confidence interval. Any suggestions?
Thanks a lot.
This can be done easily using the Gramm toolbox, available at https://github.com/piermorel/gramm/blob/master/README.md
The link also gives a minimal working example as well as information on more advanced control of the figure.
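For a rough idea of the same kind of figure without any toolbox, here is an illustrative matplotlib sketch (synthetic data; this is not the Gramm example from the link): every realization is drawn as a grey shadow, and dashed lines mark a pointwise 95% interval.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 100)
    realizations = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, size=(50, t.size))

    # Grey "shadow": plot every realization with low alpha
    for r in realizations:
        plt.plot(t, r, color="grey", alpha=0.15, linewidth=0.8)

    # Dashed lines: pointwise 95% interval across the realizations
    lo, hi = np.percentile(realizations, [2.5, 97.5], axis=0)
    plt.plot(t, lo, "k--", label="95% interval")
    plt.plot(t, hi, "k--")
    plt.legend()
    plt.show()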

MATLAB - Exponential (exp2) curve fitting function not giving the same output as the plotted graph when using the fit values in the original equation

My brain is pickled with this one. Below are the two graphs I have plotted with the exp2 function. The points do not match the curve, and this is ultimately changing my entire answer, as it gives the wrong values out, and I cannot understand why.
[two plots of concentration against time]
Here is the code I am using; both graphs plot concentration against time, but give different results:
CH4_fit = fit(Res_time, CH4_exp, 'exp2');
CH4_coeff = coeffvalues(CH4_fit);   % coefficient values for the exponential fit
CH4_pred = (CH4_coeff(1)*exp(CH4_coeff(2)*Res_time)) + ...
           (CH4_coeff(3)*exp(CH4_coeff(4)*Res_time));
plot(Res_time, CH4_exp, Res_time, CH4_pred);
Can I just add that the exact same data was run on different computers, and it gave exactly the same equation coefficients (to 4 d.p.) and the same times, yet it still outputs different concentrations on my version? I have R2018b, and I have just used the default settings (I don't know how to change anything, so I definitely haven't).

Stochastic Gradient Descent (Momentum) Formula Implementation C++

So I have an implementation of a neural network that I followed on YouTube. The guy uses SGD (momentum) as the optimization algorithm and the hyperbolic tangent as the activation function. I already changed the transfer function to Leaky ReLU (for the hidden layers) and sigmoid (for the output layer).
But now I decided I should also change the optimization algorithm to Adam, and I ended up searching for SGD (momentum) on Wikipedia for a deeper understanding of how it works, and I noticed something's off. The formula the guy uses in the clip is different from the one on Wikipedia, and I'm not sure if that's a mistake or not. The clip is one hour long, but I'm not asking you to watch the entire video; I'm intrigued by the 54m37s mark and the Wikipedia formula, right here:
https://youtu.be/KkwX7FkLfug?t=54m37s
https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum
So if you take a look at the guy's implementation and then at the Wikipedia link for the SGD (momentum) formula, basically the only difference is in the calculation of the delta weight.
Wikipedia states that you take the momentum multiplied by the old delta weight and subtract from it the learning rate multiplied by the gradient and the output value of the neuron. In the tutorial, instead of subtracting, the guy adds those together. However, the formula for the new weight is correct. It simply adds the delta weight to the old weight.
So my question is, did the guy in the tutorial make a mistake, or is there something I am missing? Because somehow, I trained a neural network and it behaves accordingly, so I can't really tell what the problem is here. Thanks in advance.
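For reference, the two updates being compared can be written out as follows (a plain restatement in LaTeX; \alpha is the momentum coefficient, \eta the learning rate, and g the gradient term as computed in the tutorial):

    \text{Wikipedia:}\quad \Delta w \leftarrow \alpha\,\Delta w - \eta\, g, \qquad w \leftarrow w + \Delta w
    \text{Tutorial:}\quad  \Delta w \leftarrow \alpha\,\Delta w + \eta\, g, \qquad w \leftarrow w + \Delta w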
I have seen momentum implemented in different ways. Personally, I followed this guide in the end: http://ruder.io/optimizing-gradient-descent
There, momentum and weights are updated separately, which I think makes it clearer.
I do not know enough about the variables in the video, so I am not sure about that, but the Wikipedia version is definitely correct.
In the video, the gradient*learning_rate gets added instead of subtracted, which is fine if you calculate and propagate your error accordingly.
Also, where the video says "neuron_getOutputVal()*m_gradient", if it is as I think it is, that whole thing is considered the gradient. What I mean is that you have to multiply what you propagate by the outputs of your neurons to get the actual gradient.
For gradient descent without momentum, once you have your actual gradient, you multiply it with a learning rate and subtract (or add, depending on how you calculated and propagated the error, but usually subtract) it from your weights.
With momentum, you do it as Wikipedia says, using the last "change to your weights" or "delta weights" as part of your formula.
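As a rough sketch of how the two sign conventions line up (illustrative Python, not the tutorial's C++; names and values are assumptions):

    def momentum_step(w, grad, delta_w, lr=0.01, alpha=0.9, wikipedia_sign=True):
        # Wikipedia:  delta_w = alpha*delta_w - lr*grad
        # Tutorial:   delta_w = alpha*delta_w + lr*grad (fine if the propagated
        #             "gradient" already carries the opposite sign)
        if wikipedia_sign:
            delta_w = alpha * delta_w - lr * grad
        else:
            delta_w = alpha * delta_w + lr * grad
        return w + delta_w, delta_w

    # Both conventions follow the same trajectory when the sign of the
    # error/gradient is flipped consistently:
    w1 = w2 = 1.0
    d1 = d2 = 0.0
    for _ in range(5):
        g = 2.0 * w1                     # gradient of w^2 at the current weight
        w1, d1 = momentum_step(w1, g, d1, wikipedia_sign=True)
        w2, d2 = momentum_step(w2, -g, d2, wikipedia_sign=False)
    print(w1, w2)                        # identical values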

Kalman filter for car's tracking path

I have a set of points like Point(x,y). After the car has gone along the same road in so many different ways, the resulting map is almost a mess. I heard that a Kalman filter can make a single path from the available paths.
Can anybody say how to do this? I am not from a computer science background, so please explain the concept and those matrices to me, and then I will code them. Please, anybody, enlighten me about the concept.
As far as I know, the Kalman filter is capable of combining several sources of the same information to get a more precise estimate of the observed variable. It can also be applied to the same measurement device measuring multiple times.
Here is a good introduction: INTRO, AnotherOne
I don't know if this question is still active, but if you're interested in learning more about the Kalman filter, I can strongly recommend this short MATLAB script. Even if you don't have MATLAB installed, it should be about the simplest example on the Kalman filter you're likely to find!
I don't see how exactly a Kalman filter would be applied here.
I would approach this problem either by image processing, so that a thick path is reduced to a thin line, or by successive linear regression on the path segments.
You're probably trying to use the detected coordinates of a car to determine where a road is when there is no roadmap information available. Trying to create a road when there is no road, right?
The Kalman filter is meant to smooth values obtained from a sensor. When a sensor detects a car, it may not give the car's actual position; it will contain some errors in the x and y coordinates.
You have to feed these x,y values to the Kalman filter while the data is being obtained from the sensor, or at least in the order in which it was obtained from the sensor.
The Kalman filter will give you the estimated values (smoothened values) of x and y positions, which will tell you approximately the correct position of the car.
Assuming that the car is travelling in the middle of the road, these estimated (filtered) x,y values are what you can take as the midpoints of the road.
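To make that concrete, here is a rough numpy sketch of a constant-velocity Kalman filter smoothing noisy (x, y) points (the motion model, noise values, and names are assumptions for illustration, not from the answer above):

    import numpy as np

    def kalman_smooth(points, dt=1.0, meas_noise=4.0, proc_noise=0.01):
        # State: [x, y, vx, vy] with a constant-velocity motion model
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)   # state transition
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)    # we only measure x and y
        Q = proc_noise * np.eye(4)                   # process noise covariance
        R = meas_noise * np.eye(2)                   # measurement noise covariance

        x = np.array([points[0][0], points[0][1], 0.0, 0.0])  # initial state
        P = np.eye(4) * 100.0                                  # initial uncertainty

        estimates = []
        for z in points:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the measurement z = (x, y)
            innovation = np.asarray(z, dtype=float) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ innovation
            P = (np.eye(4) - K @ H) @ P
            estimates.append(x[:2].copy())
        return np.array(estimates)

    # Noisy GPS-like points along a straight road
    raw = [(t + np.random.normal(0, 2), 0.5 * t + np.random.normal(0, 2)) for t in range(50)]
    smooth = kalman_smooth(raw)   # estimated positions, e.g. midpoints of the road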
I saw your question only now. I know it's late, but I hope that helps.
