Translating right ascension and declination onto an image - galsim

I want to read in the right ascension (in hours), declination (in degrees) and size (in arcmin) of a catalogue of galaxies and draw all of them in a large image of a specified pixel size.
I tried converting the ra, dec and size into pixels to create a Bounds object for each galaxy, but I get the error "BoundsI must be initialized with integer values." I understand that pixels have to be integers...
But is there a way to center the large image at a specified ra and dec, and then pass the ra and dec of each galaxy as parameters to draw it in?
Thank you in advance!

GalSim uses the CelestialCoord class to handle coordinates in the sky and any of a number of WCS classes to handle the conversion from pixels to celestial coordinates.
The two demos in the tutorial series that use a CelestialWCS (the base class for WCS classes that use celestial coordinates for their world coordinate system) are demo11 and demo13, so you might want to take a look at them. However, neither one does exactly what you're describing.
So here's a script that more or less does what you described.
import galsim
import numpy
# Make some random input data so we can run this.
# You would use values from your input catalog.
ngal = 20
numpy.random.seed(123)
ra = 15 + 0.02*numpy.random.random( (ngal) ) # hours
dec = -34 + 0.3*numpy.random.random( (ngal) ) # degrees
size = 0.1 * numpy.random.random( (ngal) ) # arcmin
e1 = 0.5 * numpy.random.random( (ngal) ) - 0.25
e2 = 0.5 * numpy.random.random( (ngal) ) - 0.25
# arcsec is usually the more natural units for sizes, so let's
# convert to that here to make things simpler later.
# There are options throughout GalSim to do things in different
# units, such as arcmin, but arcsec is the default, so it will
# be simpler if we don't have to worry about that.
size *= 60 # size now in arcsec
# Some plausible location at which to center the image.
# Note that we are now attaching the right units to these
# so GalSim knows what angle they correspond to.
cen_ra = numpy.mean(ra) * galsim.hours
cen_dec = numpy.mean(dec) * galsim.degrees
# GalSim uses CelestialCoord to handle celestial coordinates.
# It knows how to do all the correct spherical geometry calculations.
cen_coord = galsim.CelestialCoord(cen_ra, cen_dec)
print 'cen_coord = ',cen_coord.ra.hms(), cen_coord.dec.dms()
# Define some reasonable pixel size.
pixel_scale = 0.4 # arcsec / pixel
# Make the full image of some size.
# Powers of two are typical, but not required.
image_size = 2048
image = galsim.Image(image_size, image_size)
# Define the WCS we'll use to connect pixels to celestial coords.
# For real data, this would usually be read from the FITS header.
# Here, we'll need to make our own. The simplest one that properly
# handles celestial coordinates is TanWCS. It first goes from
# pixels to a local tangent plane using a linear affine transformation.
# Then it projects that tangent plane into the spherical sky coordinates.
# In our case, we can just let the affine transformation be a uniform
# square pixel grid with its origin at the center of the image.
affine_wcs = galsim.PixelScale(pixel_scale).affine().withOrigin(image.center())
wcs = galsim.TanWCS(affine_wcs, world_origin=cen_coord)
image.wcs = wcs # Tell the image to use this WCS
for i in range(ngal):
    # Get the celestial coord of the galaxy
    coord = galsim.CelestialCoord(ra[i]*galsim.hours, dec[i]*galsim.degrees)
    print 'gal coord = ',coord.ra.hms(), coord.dec.dms()
    # Where is it in the image?
    image_pos = wcs.toImage(coord)
    print 'position in image = ',image_pos
    # Make some model of the galaxy.
    flux = size[i]**2 * 1000  # Make bigger things brighter...
    gal = galsim.Exponential(half_light_radius=size[i], flux=flux)
    gal = gal.shear(e1=e1[i], e2=e2[i])
    # Pull out a cutout around where we want the galaxy to be.
    # The bounds need to be integers.
    # The fractional part of the position will go into offset when we draw.
    ix = int(image_pos.x)
    iy = int(image_pos.y)
    bounds = galsim.BoundsI(ix-64, ix+64, iy-64, iy+64)
    # This might be (partially) off the full image, so get the overlap region.
    bounds = bounds & image.bounds
    if not bounds.isDefined():
        print '  This galaxy is completely off the image.'
        continue
    # This is the portion of the full image where we will draw. If you try to
    # draw onto the full image, it will use a lot of memory, but if you go too
    # small, you might see artifacts at the edges. You might need to
    # experiment a bit with what is a good size cutout.
    sub_image = image[bounds]
    # Draw the galaxy.
    # GalSim by default will center the object at the "true center" of the
    # image. We actually want it centered at image_pos, so provide the
    # difference as the offset parameter.
    # Also, the default is to overwrite the image. But we want to add to
    # the existing image in case galaxies overlap. Hence add_to_image=True.
    gal.drawImage(image=sub_image, offset=image_pos - sub_image.trueCenter(),
                  add_to_image=True)
# Probably want to add a little noise...
image.addNoise(galsim.GaussianNoise(sigma=0.5))
# Write to a file.
image.write('output.fits')

GalSim deals with image bounds and locations using image coordinates. The way to connect true positions on the sky (RA, dec) to image coordinates is to use the World Coordinate System (WCS) functionality in GalSim. I gather from your description that there is a simple mapping from RA/dec to pixel coordinates (i.e., there are no distortions).
So basically, you would set up a simple WCS defining the (RA, dec) center of the big image and its pixel scale. Then for a given galaxy (RA, dec), you can use the "toImage" method of the WCS to figure out where on the big image the galaxy should live. Any subimage bounds can be constructed using that information.
For a simple example with a trivial world coordinate system, you can check out demo10 in the GalSim repository.
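If it helps, here is a minimal sketch of just that WCS-plus-toImage step, condensed from the longer script above. The center, pixel scale, image size, and galaxy position are placeholder values, and it uses the same older-style GalSim calls as that script:
import galsim
# Placeholder values for the image center, pixel scale, and size.
cen_coord = galsim.CelestialCoord(15.01 * galsim.hours, -33.85 * galsim.degrees)
pixel_scale = 0.4            # arcsec / pixel
image = galsim.Image(2048, 2048)
# A simple TanWCS: square pixels, origin at the image center,
# tangent point at the chosen celestial center.
affine = galsim.PixelScale(pixel_scale).affine().withOrigin(image.center())
image.wcs = galsim.TanWCS(affine, world_origin=cen_coord)
# For one galaxy from the catalogue (placeholder RA/dec):
coord = galsim.CelestialCoord(15.012 * galsim.hours, -33.84 * galsim.degrees)
image_pos = image.wcs.toImage(coord)   # position in image (pixel) coordinates
# Integer bounds for a cutout around that position; the fractional part of
# image_pos becomes the offset when drawing, as in the full script above.
ix, iy = int(image_pos.x), int(image_pos.y)
bounds = galsim.BoundsI(ix - 64, ix + 64, iy - 64, iy + 64) & image.bounds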

Related

Saving adversarial samples as images and loading them back, but the attack fails

I am testing adversarial sample attacks using DeepFool and SparseFool on the MNIST dataset. The attack works on the preprocessed image data. However, when I save the adversarial sample as an image and then load it back, the attack fails.
I have tested it using SparseFool and DeepFool, and I think there are some precision problems when I save it as an image, but I cannot figure out how to implement it correctly.
if __name__ == "__main__":
    # pic_path = 'testSample/img_13.jpg'
    pic_path = "./hacked.jpg"
    model_file = './trained/'
    image = Image.open(pic_path)
    image_array = np.array(image)
    # print(np.shape(image_array))  # 28*28
    shape = (28, 28, 1)
    projection = (0, 1)
    image_norm = tf.cast(image_array / 255.0 - 0.5, tf.float32)
    image_norm = np.reshape(image_norm, shape)  # 28*28*1
    image_norm = image_norm[tf.newaxis, ...]  # 1*28*28*1
    model = tf.saved_model.load(model_file)
    print(np.argmax(model(image_norm)), "nnn")
    # fool_img, r, pred_label, fool_label, loops = SparseFool(
    #     image_norm, projection, model)
    print("pred_label", pred_label)
    print("fool_label", np.argmax(model(fool_img)))
    pert_image = np.reshape(fool_img, (28, 28))
    # print(pert_image)
    pert_image = np.copy(pert_image)
    # np.savetxt("pert_image.txt", (pert_image + 0.5) * 255)
    pert_image += 0.5
    pert_image *= 255.
    # shape = (28, 28, 1)
    # projection = (0, 1)
    # pert_image = tf.cast(((pert_image - 0.5) / 255.), tf.float32)
    # image_norm = np.reshape(pert_image, shape)  # 28*28*1
    # image_norm = image_norm[tf.newaxis, ...]  # 1*28*28*1
    # print(np.argmax(model(image_norm)), "ffffnnn")
    png = Image.fromarray(pert_image.astype(np.uint8))
    png.save("./hacked.jpg")
The attack should change the prediction from 4 to 9; however, the saved image is still predicted as 4.
The full code project is shared on
https://drive.google.com/open?id=132_SosfQAET3c4FQ2I1RS3wXsT_4W5Mw
Based on my research, and also on this paper as a reference: https://arxiv.org/abs/1607.02533
In real life, many adversarial samples generated by an attack stop working once they are converted to actual images. The paper explains it as follows: "This could be explained by the fact that iterative methods exploit more subtle kind of perturbations, and these subtle perturbations are more likely to be destroyed by photo transformation."
For example, your clean image has pixel values 127, 200, 55, ... You divide by 255 (as it is an 8-bit image) and feed your model (0.4980, 0.7843, 0.2157, ...). DeepFool is an advanced attack method: it adds a small perturbation and changes the input to something like (0.4981, 0.7841, 0.2155, ...). This is the adversarial sample that can fool your model. But if you try to save it back to 8 bits, multiplying by 255 gives you 127, 200, 55, ... again, so the adversarial information is lost.
Simply put, DeepFool adds a perturbation so small that it essentially cannot be represented in a real-world 8-bit image file.
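To make the rounding argument concrete, here is a small self-contained NumPy sketch; the pixel values and the size of the perturbation are made up for illustration:
import numpy as np
# Hypothetical clean pixel values from an 8-bit image.
clean_uint8 = np.array([127, 200, 55], dtype=np.uint8)
# What the model sees after the usual preprocessing.
clean_float = clean_uint8 / 255.0                  # [0.4980, 0.7843, 0.2157]
# A DeepFool-style perturbation is typically much smaller than one
# 8-bit step (1/255, about 0.0039).
adv_float = clean_float + np.array([1e-4, -2e-4, -1e-4])
# Saving to an 8-bit file quantizes the values back to integers...
saved_uint8 = np.clip(np.round(adv_float * 255.0), 0, 255).astype(np.uint8)
# ...and the tiny perturbation disappears: the reloaded pixels equal the clean ones.
print(saved_uint8)                                 # [127 200  55]
print(np.array_equal(saved_uint8, clean_uint8))    # True
Note also that the script above saves to hacked.jpg; JPEG is lossy, so on top of the 8-bit quantization the compression itself will further distort small perturbations. Saving as PNG avoids the compression loss but not the quantization.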

Auto-brightening images

I found this code for auto-brightening images to an optimum level.
% AUTOBRIGHTNESS
% -->Automatically adjusts brightness of images to optimum level.
% e.g. autobrightness('Sunset.jpg','Output.jpg')
function autobrightness(input_img,output_img)
my_limit = 0.5;
input_image=imread(input_img);
if size(input_image,3)==3
    a=rgb2ntsc(input_image);
else
    a=double(input_image)./255;
end
mean_adjustment = my_limit-mean(mean(a(:,:,1)));
a(:,:,1) = a(:,:,1) + mean_adjustment*(1-a(:,:,1));
if size(input_image,3)==3
    a=ntsc2rgb(a);
end
imwrite(uint8(a.*255),output_img);
I want to ask: why is the value of my_limit 0.5?
How do we determine that value?
Why use the 'ntsc' colorspace instead of another colorspace like hsv, lab or yCbCr?
I want to ask: why is the value of my_limit 0.5? How do we determine that value?
The luminance (Y) channel of the NTSC color space ranges from 0 to 1, so essentially 0.5 is the center. This is equivalent to choosing 127 in 8-bit RGB space.
Why use the 'ntsc' colorspace instead of another colorspace like hsv, lab or yCbCr?
I believe NTSC provides 100% coverage of the color space, and so the author of the code chose it. However, most modern systems won't display in this color space, and hence we use standard RGB for display. I used this website to come to this conclusion: NTSC color space.
Also, as pointed out by Cris in this Wikipedia page, NTSC stores luminance and chrominance, and the author of the code is adjusting the luminance (brightness). I am including a modified script I used to come to these conclusions:
input_img='lena_std.tif'
output_img='lena2.tif'
my_limit = 0.5;
input_image=imread(input_img);
if size(input_image,3)==3
    a=rgb2ntsc(input_image);
    k=rgb2ntsc(input_image);
else
    a=double(input_image)./255;
end
mean_adjustment = my_limit-mean(mean(a(:,:,1)));
a(:,:,1) = a(:,:,1) + mean_adjustment*(1-a(:,:,1));
if size(input_image,3)==3
    a=ntsc2rgb(a);
end
imwrite(uint8(a.*255),output_img);
output=uint8(a.*255);
imwrite(uint8(k.*255),'test.tif');
ntscoutput=uint8(k.*255);

Motion History Image (MHI) in Matlab

My project is to detect human activity through stored video clips.
I am successfully able to do the following:
Get the Motion History Image (MHI) from a video using OpenCV
Train and classify the set of images using Matlab
However, I want to use Matlab in order to get the Motion History Image (MHI). Is it possible, and if yes can someone guide me? Thank you.
I have attached a sample Motion History Image (MHI)
I have used the following code for MHI:
http://www.ece.iastate.edu/~alexs/classes/2007_Fall_401/code/09_MotionHistory/motempl.c
MHI is just a way of implementing motion detection (and uses silhouettes as its basis).
Let's suppose that the silhouette of the most recent object has been created. MHI also uses a timestamp to identify whether the current silhouette is recent or not. The older silhouettes have to be compared with the current silhouette in order to detect movement, so earlier silhouettes are also kept in the image, with earlier timestamps.
MHI describes the changes of moving objects over the image sequence. Basically, you only maintain an image in which every pixel encodes time information: whether the silhouette is recent or not, i.e. where the movement occurred at a given time.
Therefore the implementation of MHI is very simple, e.g.:
function MHI = MHI(fg)
% Initialize the output, MHI a.k.a. H(x,y,t,T)
MHI = fg;
% Define MHI parameter T
T = 15; % # of frames being considered; maximal value of MHI.
% Load the first frame
frame1 = fg{1};
% Get dimensions of the frames
[y_max, x_max] = size(frame1);
% Compute H(x,y,1,T) (the first MHI)
MHI{1} = fg{1} .* T;
% Start global loop for each frame
for frameIndex = 2:length(fg)
    % Load current frame from image cell
    frame = fg{frameIndex};
    % Begin looping through each point
    for y = 1:y_max
        for x = 1:x_max
            if (frame(y,x) == 255)
                MHI{frameIndex}(y,x) = T;
            else
                if (MHI{frameIndex-1}(y,x) > 1)
                    MHI{frameIndex}(y,x) = MHI{frameIndex-1}(y,x) - 1;
                else
                    MHI{frameIndex}(y,x) = 0;
                end
            end
        end
    end
end
Code from: https://searchcode.com/codesearch/view/8509149/
Update #1:
Try to draw it as follows:
% showMHI.m
% Input frame number and motion history vector to display normalized MHI
% at the specified frame.
function showMHI(n, motion_history)
frameDisp = motion_history{n};
frameDisp = double(frameDisp);
frameDisp = frameDisp ./ 15;
figure, imshow(frameDisp)
title('MHI Image');

Compare two images and highlight the differences on the second image

Below is the current working code in Python, using PIL, for highlighting the differences between two images. But the rest of the image is blackened.
I want to show the background as well, along with the highlighted differences.
Is there any way I can keep the background visible (just lighter) and only highlight the differences?
from PIL import Image, ImageChops

point_table = ([0] + ([255] * 255))

def black_or_b(a, b):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # diff = diff.point(point_table)
    h, w = diff.size
    new = diff.convert('RGB')
    new.paste(b, mask=diff)
    return new

a = Image.open('i1.png')
b = Image.open('i2.png')
c = black_or_b(a, b)
c.save('diff.png')
https://drive.google.com/file/d/0BylgVQ7RN4ZhTUtUU1hmc1FUVlE/view?usp=sharing
PIL does have some handy image manipulation methods, but also a lot of shortcomings when one wants to start doing serious image processing. Most Python literature will recommend switching to NumPy over your pixel data, which will give you full control. Other imaging libraries such as leptonica, gegl and vips all have Python bindings and a range of nice functions for image composition/segmentation.
In this case, the thing is to imagine how one would get to the desired output in an image manipulation program: you'd have a black (or other color) shade to place over the original image, and over this, paste the second image, using a threshold of the differences as a mask for the second image (i.e. a pixel either is equal or is different; all intermediate values should be rounded to "different").
I modified your function to create such a composition:
from PIL import Image, ImageChops, ImageDraw

point_table = ([0] + ([255] * 255))

def new_gray(size, color):
    img = Image.new('L', size)
    dr = ImageDraw.Draw(img)
    dr.rectangle((0, 0) + size, color)
    return img

def black_or_b(a, b, opacity=0.85):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # Hack: there is no dedicated threshold in PIL,
    # so we add the difference with itself to do
    # a poor man's thresholding of the mask
    # (the values for equal pixels - 0 - don't add up).
    thresholded_diff = diff
    for repeat in range(3):
        thresholded_diff = ImageChops.add(thresholded_diff, thresholded_diff)
    h, w = size = diff.size
    mask = new_gray(size, int(255 * (opacity)))
    shade = new_gray(size, 0)
    new = a.copy()
    new.paste(shade, mask=mask)
    # To have the original image show partially on the final result,
    # simply put "diff" instead of thresholded_diff below.
    new.paste(b, mask=thresholded_diff)
    return new

a = Image.open('a.png')
b = Image.open('b.png')
c = black_or_b(a, b)
c.save('c.png')
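As an aside, if the repeated ImageChops.add trick feels too hacky, Pillow's point method can serve as an explicit threshold. This is just a sketch of that alternative; the cutoff of 0 is chosen to match the "any nonzero difference counts" behavior above:
from PIL import Image, ImageChops

def thresholded_difference(a, b):
    # Grayscale difference, then map every nonzero value to 255.
    diff = ImageChops.difference(a, b).convert('L')
    return diff.point(lambda p: 255 if p > 0 else 0)
The result can be passed directly as the mask argument to new.paste in black_or_b.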
Here's a solution using libvips:
import sys
from gi.repository import Vips
a = Vips.Image.new_from_file(sys.argv[1], access = Vips.Access.SEQUENTIAL)
b = Vips.Image.new_from_file(sys.argv[2], access = Vips.Access.SEQUENTIAL)
# a != b makes an N-band image with 0/255 for false/true ... we have to OR the
# bands together to get a 1-band mask image which is true for pixels which
# differ in any band
mask = (a != b).bandbool("or")
# now pick pixels from a or b with the mask ... dim false pixels down
diff = mask.ifthenelse(a, b * 0.2)
diff.write_to_file(sys.argv[3])
With PNG images, most CPU time is spent in PNG read and write, so vips is only a bit faster than the PIL solution.
libvips does use a lot less memory, especially for large images. libvips is a streaming library: it can load, process and save the result all at the same time, it does not need to have the whole image loaded into memory before it can start work.
For a 10,000 x 10,000 RGB tif, libvips is about twice as fast and needs about 1/10th the memory.
If you're not wedded to the idea of using Python, there are a few really simple solutions using ImageMagick:
“Diff” an image using ImageMagick

Setting correct limits with imshow if image data shape changes

I have a 3D array, of which the first two dimensions are spatial, so say (x,y). The third dimension contains point-specific information.
print H.shape # --> (200, 480, 640) spatial extents (200,480)
Now, by selecting a certain plane in the third dimension, I can display an image with
imdat = H[:,:,100] # shape (200, 480)
img = ax.imshow(imdat, cmap='jet',vmin=imdat.min(),vmax=imdat.max(), animated=True, aspect='equal')
I want to now rotate the cube, so that I switch from (x,y) to (y,x).
H = np.rot90(H) # could also use H.swapaxes(0,1) or H.transpose((1,0,2))
print H.shape # --> (480, 200, 640)
Now, when I call:
imdat = H[:,:,100] # shape (480,200)
img.set_data(imdat)
ax.relim()
ax.autoscale_view(tight=True)
I get weird behavior: along the rows, the image displays the data up to the 200th row and is then black until the end of the y-axis (480). The x-axis extends from 0 to 200 and shows the rotated data. After another 90-degree rotation, the image displays correctly (just rotated by 180 degrees, of course).
It seems to me like after rotating the data, the axis limits, (or image extents?) or something is not refreshing correctly. Can somebody help?
PS: to indulge in bad hacking, I also tried to regenerate a new image (by calling ax.imshow) after each rotation, but I still get the same behavior.
Below I include a solution to your problem. The method resetExtent uses the data and the image to explicitly set the extent to the desired values. Hopefully I correctly emulated the intended outcome.
import matplotlib.pyplot as plt
import numpy as np

def resetExtent(data, im):
    """
    Using the data and axes from an AxesImage, im, force the extent and
    axis values to match shape of data.
    """
    ax = im.get_axes()
    dataShape = data.shape
    if im.origin == 'upper':
        im.set_extent((-0.5, dataShape[0]-.5, dataShape[1]-.5, -.5))
        ax.set_xlim((-0.5, dataShape[0]-.5))
        ax.set_ylim((dataShape[1]-.5, -.5))
    else:
        im.set_extent((-0.5, dataShape[0]-.5, -.5, dataShape[1]-.5))
        ax.set_xlim((-0.5, dataShape[0]-.5))
        ax.set_ylim((-.5, dataShape[1]-.5))

def main():
    fig = plt.gcf()
    ax = fig.gca()
    H = np.zeros((200, 480, 10))
    # make distinguishing corner of data
    H[100:, ...] = 1
    H[100:, 240:, :] = 2
    imdat = H[:, :, 5]
    datShape = imdat.shape
    im = ax.imshow(imdat, cmap='jet', vmin=imdat.min(),
                   vmax=imdat.max(), animated=True,
                   aspect='equal',
                   # origin='lower'
                   )
    resetExtent(imdat, im)
    fig.savefig("img1.png")
    H = np.rot90(H)
    imdat = H[:, :, 0]
    im.set_data(imdat)
    resetExtent(imdat, im)
    fig.savefig("img2.png")

if __name__ == '__main__':
    main()
This script produces two images:
First un-rotated:
Then rotated:
I thought just explicitly calling set_extent would do everything resetExtent does, because it should adjust the axes limits if 'autoscale' is True. But for some unknown reason, calling set_extent alone does not do the job.
