I want to shade the area under the density curve of a normal distribution over the following ranges:
1) mean - 2*std to mean - std -> in red
2) mean + std to mean + 2*std -> in red
3) mean - std to mean + std -> in blue
This is a variant of the question "Shade (fill or color) area under density curve by quantile".
The data used to draw the density curve is taken from a column of a DataFrame.
For example, this is only part of the data; the column has 256 values:
Gap
1 -3.260010
2 -7.790009
3 -1.179993
4 2.270019
5 9.000000
6 -4.930023
7 -7.920014
To draw the plot, I used the following code:
sns.kdeplot(TeslaStock18_19['Gap'], label = 'Gap Density', color = 'darkblue')
Considering all the data, I found that the distribution is approximately normal. This allows me to use the empirical rule (68-95-99.7) to make some statistical considerations.
What I would like to obtain is the following plot:
https://www.nku.edu/~statistics/images/Using_1.gif
N.B. I am just starting to use Python; this is for a university project.
This is what I tried, but it does not completely fill the area:
ptx = np.linspace(meanGap - stdGap, meanGap + stdGap)
pty = scipy.stats.norm.pdf(ptx, meanGap, stdGap)
plt.fill_between(ptx, pty, color='#0b559f', alpha=0.35)
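One way to get the full picture, sketched below under the assumption that meanGap and stdGap are the mean and standard deviation of the Gap column (variable names reused from the question, not tested against the real data): evaluate the fitted normal pdf on each of the three intervals and call fill_between once per band.
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
import seaborn as sns

gap = TeslaStock18_19['Gap']
meanGap, stdGap = gap.mean(), gap.std()

sns.kdeplot(gap, label='Gap Density', color='darkblue')

def shade(lo, hi, color):
    # Fill the area under the fitted normal pdf between lo and hi.
    x = np.linspace(lo, hi, 200)
    y = scipy.stats.norm.pdf(x, meanGap, stdGap)
    plt.fill_between(x, y, color=color, alpha=0.35)

shade(meanGap - 2*stdGap, meanGap - stdGap, 'red')   # (mean-2s, mean-s)
shade(meanGap + stdGap, meanGap + 2*stdGap, 'red')   # (mean+s, mean+2s)
shade(meanGap - stdGap, meanGap + stdGap, 'blue')    # (mean-s, mean+s)

plt.legend()
plt.show()
Note that the shaded bands follow the fitted normal pdf rather than the KDE curve itself; if the two differ visibly, the same fill_between calls can be fed the x/y data of the KDE line instead.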
I am using an ImageDataGenerator to augment my images. I need to get the y labels from the generator.
Example : I have 10 training images, 7 are label 0 and 3 are label 1. I want to increase training set size to 100.
total_training_images = 100
total_val_images = 50
model.fit_generator(
    train_generator,
    steps_per_epoch=total_training_images // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=total_val_images // batch_size)
By my understanding, this trains a model on 100 training images for each epoch, with each image being augmented in some way or the other according to my data generator, and then validates on 50 images.
If I do train_generator.classes, I get an output [0,0,0,0,0,0,0,1,1,1]. This corresponds to my 7 images of label 0 and 3 images of label 1.
For these new 100 images, how do I get the y-labels?
Does this mean when I am augmenting this to 100 images, my new train_generator labels are the same thing, but repeated 10 times? Essentially np.append(train_generator.classes) 10 times?
I am following this tutorial, if that helps:
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
The labels are generated as one-hot encodings along with the images. Hope this helps!
train_generator.class_indices  # mapping of class names to label indices

import matplotlib.pyplot as plt

x, y = train_generator.next()  # one augmented batch of images and one-hot labels
for i in range(0, 3):
    img = x[i]
    label = y[i]
    print(label)
    plt.imshow(img)
    plt.show()
Based on what you're saying about the generator, yes.
It will replicate the same label for each augmented image. (Otherwise the model would not train properly).
One simple way to check what the generator is outputting is to get what it yields:
X,Y = train_generator.next() #or next(train_generator)
Just remember that this advances the generator, so it is now positioned to yield the second batch, not the first. (This would make the fit method start from the second batch.)
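If you only want to peek at a batch, here is a small sketch, assuming the standard Keras iterator API (next() and reset() are methods of the generator returned by flow/flow_from_directory):
X, Y = train_generator.next()   # one augmented batch of images and labels
print(X.shape)                  # (batch_size, height, width, channels)
print(Y[:10])                   # labels of the augmented images in this batch
train_generator.reset()         # rewind so fitting starts from the first batch again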
I have 2 images that I need to plot in one figure, and then display the points of interest found with SURF on both images:
Image 1 : size [6113x5693x3]
Image 2 : size [4896x3744x3]
When trying to plot both images in one figure with this code:
I = zeros([size(I1,1) size(I1,2)*2 size(I1,3)]);
I(:,1:size(I1,2),:)=I1;
I(:,size(I1,2)+1:size(I1,2)+size(I2,2),:)=I2;
figure, imshow(I); hold on;
and to display the points of interest on each of them with:
plot([Pos1(:,2) Pos2(:,2)+size(I1,2)]',[Pos1(:,1) Pos2(:,1)]','-');
plot([Pos1(:,2) Pos2(:,2)+size(I1,2)]',[Pos1(:,1) Pos2(:,1)]','o');
I get this error and I don't know how to fix it:
Subscripted assignment dimension mismatch.
Any suggestions are welcome!
Walk through this line by line. The error occurs on line 3. You are trying to assign I2 (with dimension 4896x3744x3) to a select part of I that has an incorrect first dimension (since the first dimension of I is the same as I1, not I2).
size(I(:,size(I1,2)+1:size(I1,2)+size(I2,2),:)) = [ 6113 3744 3 ]
size(I2) = [4896 3744 3]
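One way to fix it, as a sketch (not tested on your images): size the canvas by the taller of the two images and copy each image into its own block of columns; the leftover rows simply stay black.
% Sketch: canvas tall enough for the taller image, wide enough for both.
rows = max(size(I1,1), size(I2,1));
I = zeros(rows, size(I1,2)+size(I2,2), size(I1,3), class(I1));
I(1:size(I1,1), 1:size(I1,2), :) = I1;
I(1:size(I2,1), size(I1,2)+1:end, :) = I2;
figure, imshow(I); hold on;
If I1 and I2 are uint8, keeping their class (rather than double) also avoids imshow rendering the result as almost all white.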
I want to read in the right ascension (in hour angles), declination (in degrees) and size (in arcmin) of a catalogue of galaxies and draw all of them in a large image of specified pixel size.
I tried converting the ra, dec and size into pixels to create a Bounds object for each galaxy, but get an error that "BoundsI must be initialized with integer values." I understand that pixels have to be integers...
But is there a way to center the large image at a specified ra and dec, then input the ra and dec of each galaxy as parameters to draw it in?
Thank you in advance!
GalSim uses the CelestialCoord class to handle coordinates in the sky and any of a number of WCS classes to handle the conversion from pixels to celestial coordinates.
The two demos in the tutorial series that use a CelestialWCS (the base class for WCS classes that use celestial coordinates for their world coordinate system) are demo11 and demo13. So you might want to take a look at them. However, neither one does something very close to what you're doing.
So here's a script that more or less does what you described.
import galsim
import numpy
# Make some random input data so we can run this.
# You would use values from your input catalog.
ngal = 20
numpy.random.seed(123)
ra = 15 + 0.02*numpy.random.random( (ngal) ) # hours
dec = -34 + 0.3*numpy.random.random( (ngal) ) # degrees
size = 0.1 * numpy.random.random( (ngal) ) # arcmin
e1 = 0.5 * numpy.random.random( (ngal) ) - 0.25
e2 = 0.5 * numpy.random.random( (ngal) ) - 0.25
# arcsec is usually the more natural units for sizes, so let's
# convert to that here to make things simpler later.
# There are options throughout GalSim to do things in different
# units, such as arcmin, but arcsec is the default, so it will
# be simpler if we don't have to worry about that.
size *= 60 # size now in arcsec
# Some plausible location at which to center the image.
# Note that we are now attaching the right units to these
# so GalSim knows what angle they correspond to.
cen_ra = numpy.mean(ra) * galsim.hours
cen_dec = numpy.mean(dec) * galsim.degrees
# GalSim uses CelestialCoord to handle celestial coordinates.
# It knows how to do all the correct spherical geometry calculations.
cen_coord = galsim.CelestialCoord(cen_ra, cen_dec)
print 'cen_coord = ',cen_coord.ra.hms(), cen_coord.dec.dms()
# Define some reasonable pixel size.
pixel_scale = 0.4 # arcsec / pixel
# Make the full image of some size.
# Powers of two are typical, but not required.
image_size = 2048
image = galsim.Image(image_size, image_size)
# Define the WCS we'll use to connect pixels to celestial coords.
# For real data, this would usually be read from the FITS header.
# Here, we'll need to make our own. The simplest one that properly
# handles celestial coordinates is TanWCS. It first goes from
# pixels to a local tangent plane using a linear affine transformation.
# Then it projects that tangent plane into the spherical sky coordinates.
# In our case, we can just let the affine transformation be a uniform
# square pixel grid with its origin at the center of the image.
affine_wcs = galsim.PixelScale(pixel_scale).affine().withOrigin(image.center())
wcs = galsim.TanWCS(affine_wcs, world_origin=cen_coord)
image.wcs = wcs # Tell the image to use this WCS
for i in range(ngal):
    # Get the celestial coord of the galaxy
    coord = galsim.CelestialCoord(ra[i]*galsim.hours, dec[i]*galsim.degrees)
    print 'gal coord = ',coord.ra.hms(), coord.dec.dms()
    # Where is it in the image?
    image_pos = wcs.toImage(coord)
    print 'position in image = ',image_pos
    # Make some model of the galaxy.
    flux = size[i]**2 * 1000  # Make bigger things brighter...
    gal = galsim.Exponential(half_light_radius=size[i], flux=flux)
    gal = gal.shear(e1=e1[i], e2=e2[i])
    # Pull out a cutout around where we want the galaxy to be.
    # The bounds need to be in integers.
    # The fractional part of the position will go into offset when we draw.
    ix = int(image_pos.x)
    iy = int(image_pos.y)
    bounds = galsim.BoundsI(ix-64, ix+64, iy-64, iy+64)
    # This might be (partially) off the full image, so get the overlap region.
    bounds = bounds & image.bounds
    if not bounds.isDefined():
        print '  This galaxy is completely off the image.'
        continue
    # This is the portion of the full image where we will draw. If you try to
    # draw onto the full image, it will use a lot of memory, but if you go too
    # small, you might see artifacts at the edges. You might need to
    # experiment a bit with what is a good size cutout.
    sub_image = image[bounds]
    # Draw the galaxy.
    # GalSim by default will center the object at the "true center" of the
    # image. We actually want it centered at image_pos, so provide the
    # difference as the offset parameter.
    # Also, the default is to overwrite the image. But we want to add to
    # the existing image in case galaxies overlap. Hence add_to_image=True.
    gal.drawImage(image=sub_image, offset=image_pos - sub_image.trueCenter(),
                  add_to_image=True)
# Probably want to add a little noise...
image.addNoise(galsim.GaussianNoise(sigma=0.5))
# Write to a file.
image.write('output.fits')
GalSim deals with image bounds and locations using image coordinates. The way to connect true positions on the sky (RA, dec) into image coordinates is using the World Coordinate System (WCS) functionality in GalSim. I gather from your description that there is a simple mapping from RA/dec into pixel coordinates (i.e., there are no distortions).
So basically, you would set up a simple WCS defining the (RA, dec) center of the big image and its pixel scale. Then for a given galaxy (RA, dec), you can use the "toImage" method of the WCS to figure out where on the big image the galaxy should live. Any subimage bounds can be constructed using that information.
For a simple example with a trivial world coordinate system, you can check out demo10 in the GalSim repository.
I have a file with three columns, all with different values. What should I do to plot it as a smooth surface with a color gradient for the third column? The first two columns are pseudo-randomly distributed, and so is the final column.
The data file looks like this:
8.4295190 0.3860565 0.3706621
-2.9886350 -0.1156874 -0.1314160
8.4375611 0.2617630 0.3710158
8.4092863 0.3195774 0.3697725
8.4237288 0.3930579 0.3704075
-1.1439280 -0.7286996 -0.0919299
-1.0866221 -0.9426172 -0.0873246
-0.9633012 -0.8667140 -0.0774141
-0.8225506 -0.6229306 -0.0661029
-0.9931836 -0.6562048 -0.0798155
-1.3138121 -0.8559578 -0.1055823
-0.8687813 -0.7689202 -0.0698182
7.3637155 1.8145656 0.1891778
7.4434600 1.9952866 0.1912265
7.5885025 1.8936264 0.1949527
7.3067197 1.8313323 0.1877136
7.5324886 2.0066328 0.1935137
You could use dgrid3d to turn your points into grid data:
set dgrid3d 32,32
set xyplane at 0
splot 'data' with pm3d
This creates a grid with 32 rows and 32 columns from your data.
You can increase the number of grid points to get a smoother surface, and you may also want to use set pm3d interpolate 0,0, which lets gnuplot choose the optimal number of interpolation points for the surface.
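Putting it together, a sketch (the grid size is just a starting point to tune for your data density):
set dgrid3d 64,64            # denser grid for a smoother surface
set pm3d interpolate 0,0     # let gnuplot choose the interpolation steps
set xyplane at 0
splot 'data' with pm3d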
Can someone provide some insight into how scales and extents work together in cubism.js?
.call(context.horizon()
.extent([-100, 100])
.scale(d3.scale.linear().domain([-10,10]).range([-100,100])
)
);
For example, what does the code above do if the values are generated using a random number generator (numbers between -10 and 10)?
I know extent is used to set the maximum and minimum.
I know how to define a scale, example:
var scale = d3.scale.threshold().domain([100]).range([0,100])
console.log(scale(1)) // returns 0
console.log(scale(99.9)) // returns 0
console.log(scale(88.9)) // returns 0
console.log(scale(100)) // returns 100
I read about d3.scales here http://alignedleft.com/tutorials/d3/scales/
My main issue is that I want to define thresholds for my data. Something very simple:
0-98 Red
98-100 Pink
100 Blue
Or maybe just
0-99.99 Red
100 Blue
But I'm not able to turn what I've read into something that works.
I'm guessing that you just want to use a different color to represent anomalies in your data. If that is true, you don't need to create a domain and range.
You can just create a custom color palette like this:
var custom_colors = ['#ef3b2c', '#084594', '#2171b5', '#4292c6', '#6baed6', '#9ecae1', '#c6dbef', '#deebf7', '#f7fbff', '#f7fcf5', '#e5f5e0', '#c7e9c0', '#a1d99b', '#74c476', '#41ab5d', '#238b45', '#006d2c', '#00441b'];
This color palette was constructed using the palette on this page with an extra red color tacked on to the end.
Then just call the custom colors like this:
d3.select("#testdiv")
  .selectAll(".horizon")
    ...
  .call(context.horizon()
    .colors(custom_colors));
Play around with the colors until you find a combination that you like. In the above example, only the outliers will be in red, while the rest follow the blue and green pattern.
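If you do want explicit value thresholds rather than a hand-picked palette, a threshold scale maps your cut-offs straight to colors. A minimal sketch in the d3 v3 syntax used in the question (note that cubism's horizon .colors() still expects a plain array, so this scale is for coloring values yourself rather than something horizon() consumes directly):
var color = d3.scale.threshold()
    .domain([98, 100])                // boundaries between the bands
    .range(['red', 'pink', 'blue']);  // <98 -> red, 98-<100 -> pink, >=100 -> blue

console.log(color(50));    // "red"
console.log(color(99));    // "pink"
console.log(color(100));   // "blue"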
Hope this helps!