I've been looking for a while for a way to plot large heatmaps quickly in a Python-based UI. In the past I have used the plotting functions available in matplotlib: contourf, pcolor, and pcolormesh. I have not used imshow since my typical data lies on a polar plane (radar). In matplotlib I would do the following:
from matplotlib.figure import Figure
import numpy as np
# some class initialization
self.fig = Figure()
self.axes = self.fig.add_subplot(111, polar=True)

def plotter(self):
    self.axes.cla()
    self.axes.pcolormesh(self.thetas, self.ranges, self.data)
    self.axes.draw()
I've been investigating the OpenGL libraries, and I like the gentle introduction I've seen with vispy. At the end of the day, I would like the simplest means of defining a set of 2D vertices, varying the color associated with each, and filling the pixels in between, either as a linear gradient or as solid polygons. While I don't fully grasp textures, I can envision defining many triangles and their colors, but this seems inefficient. There must be a straightforward way to define points and their colors and fill in between.
Something like
from vispy import gloo, app
app.use_app('pyside')
import numpy as np
VERTEX = '''
attribute vec2 position;
attribute vec4 color;
varying vec4 v_color;
void main() {
    v_color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
'''

FRAGMENT = '''
varying vec4 v_color;
void main() {
    // magic
}
'''
class PolarHeatmapWidget(app.Canvas):
    def __init__(self, _Ntheta, _Nr, **kwargs):
        app.Canvas.__init__(self, size=(400, 400), **kwargs)
        self.Ntheta = _Ntheta
        self.Nr = _Nr
        self.program = gloo.Program(VERTEX, FRAGMENT)
        self.initializeData()
        self.program['position'] = gloo.VertexBuffer(self.positions)
        self.program['color'] = gloo.VertexBuffer(self.colors)
        self.apply_zoom()

    def on_draw(self, event):
        gloo.clear()
        self.program.draw(MORE_MAGIC)
    def initialize(self):
        self.show()
    def on_resize(self, event):
        self.apply_zoom()

    def apply_zoom(self):
        minsize = min(self.physical_size[0], self.physical_size[1])
        gloo.set_viewport(self.physical_size[0] // 2 - minsize // 2,
                          self.physical_size[1] // 2 - minsize // 2,
                          minsize, minsize)
        self.update()
    def initializeData(self):
        ranges = np.linspace(0, 1, self.Nr)
        thetas = np.radians(np.linspace(0, 360, self.Ntheta))
        self.positions = np.zeros((self.Ntheta * self.Nr, 2), dtype=np.float32)
        for t in range(self.Ntheta):
            for r in range(self.Nr):
                self.positions[t * self.Nr + r][0] = ranges[r] * np.cos(thetas[t])
                self.positions[t * self.Nr + r][1] = ranges[r] * np.sin(thetas[t])
        self.colors = np.zeros((self.Ntheta * self.Nr, 4), dtype=np.float32)
        self.colors[:, 3] += 1
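For what it's worth, here is my current guess at the missing pieces: a pass-through fragment shader plus an index buffer that tessellates the polar grid into triangles. This is only a sketch of what I imagine should go there, not tested code.

import numpy as np
from vispy import gloo

# Guess at the fragment shader: just output the interpolated per-vertex color.
FRAGMENT_GUESS = '''
varying vec4 v_color;
void main() {
    gl_FragColor = v_color;
}
'''

def build_indices(Ntheta, Nr):
    # Two triangles per grid cell, wrapping around in theta so the disc closes.
    idx = []
    for t in range(Ntheta):
        t2 = (t + 1) % Ntheta
        for r in range(Nr - 1):
            a = t * Nr + r      # vertex at (theta t, ring r)
            b = t2 * Nr + r     # vertex at (theta t+1, ring r)
            idx += [a, b, a + 1, b, b + 1, a + 1]
    return np.array(idx, dtype=np.uint32)

# and then in on_draw, something like:
#     self.program.draw('triangles', gloo.IndexBuffer(build_indices(self.Ntheta, self.Nr)))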
I apologize for the brief code (I'm writing from a phone), but my goal is to end up with a full class in this thread to help myself and others in data science understand graphics at a lower level and accelerate rendering.
Thanks in advance, and my apologies if I've missed something obvious in the documentation.
Related
I am trying to implement FRST in Python to detect centroids of elliptical objects (e.g. cells in microscopy images), but my implementation does not find the seed points (roughly the center points) of the elliptical objects. This effort comes from duplicating the FRST step of Segmentation of Overlapping Elliptical Objects in Silhouette Images (https://ieeexplore.ieee.org/document/7300433). I don't know why I get these artifacts. Interestingly, the patterns (crosses) all point in the same direction for each object. Any pointer in the right direction to reproduce the result from the paper (just finding the seed points) would be most welcome.
Original Paper: A Fast Radial Symmetry Transform for Detecting Points of Interest by Loy and Zelinsky (ECCV 2002)
I have also tried the pre-existing Python package for FRST (https://pypi.org/project/frst/), which somehow produces the same artifacts. Weird.
First image: Original Image
Second image: Sobel-operated Image
Third image: Magnitude Projection Image
Fourth image: Magnitude Projection Image with positively affected pixels only
Fifth image: FRST'd image: end-product with original image overlaid (shadowed)
Sixth image: FRST'd image by the pre-existing python package with original image overlaid (shadowed).
from scipy.ndimage import gaussian_filter
import numpy as np
from scipy.signal import convolve
# Get orientation and magnitude projection images
def get_proj_img(image, radius):
    workingDims = tuple((e + 2 * radius) for e in image.shape)
    h, w = image.shape

    ori_img = np.zeros(workingDims)  # Orientation Projection Image
    mag_img = np.zeros(workingDims)  # Magnitude Projection Image

    # Kernels for the Sobel operator
    a1 = np.array([1, 2, 1])
    a2 = np.array([-1, 0, 1])
    Kx = np.outer(a1, a2)
    Ky = np.outer(a2, a1)

    # Apply the Sobel operator
    sobel_x = convolve(image, Kx)
    sobel_y = convolve(image, Ky)
    sobel_norms = np.hypot(sobel_x, sobel_y)

    # Distances to afpx, afpy (affected pixels)
    dist_afpx = np.multiply(np.divide(sobel_x, sobel_norms, out=np.zeros(sobel_x.shape), where=sobel_norms != 0), radius)
    dist_afpx = np.round(dist_afpx).astype(int)

    dist_afpy = np.multiply(np.divide(sobel_y, sobel_norms, out=np.zeros(sobel_y.shape), where=sobel_norms != 0), radius)
    dist_afpy = np.round(dist_afpy).astype(int)
    for coords, sobel_norm in np.ndenumerate(sobel_norms):
        i, j = coords
        pos_aff_pix = (i + dist_afpx[i, j], j + dist_afpy[i, j])
        neg_aff_pix = (i - dist_afpx[i, j], j - dist_afpy[i, j])
        ori_img[pos_aff_pix] += 1
        ori_img[neg_aff_pix] -= 1
        mag_img[pos_aff_pix] += sobel_norm
        mag_img[neg_aff_pix] -= sobel_norm

    ori_img = ori_img[:h, :w]
    mag_img = mag_img[:h, :w]

    print("Did it go back to the original image size?")
    print(ori_img.shape == image.shape)

    # try normalizing ori and mag img
    return ori_img, mag_img
def get_sn(ori_img, mag_img, radius, kn, alpha):
    ori_img_limited = np.minimum(ori_img, kn)
    fn = np.multiply(np.divide(mag_img, kn), np.power((np.absolute(ori_img_limited) / kn), alpha))

    # convolve fn with a Gaussian filter
    sn = gaussian_filter(fn, 0.25 * radius)
    return sn
def do_frst(image, radius, kn, alpha, ksize=3):
    ori_img, mag_img = get_proj_img(image, radius)
    sn = get_sn(ori_img, mag_img, radius, kn, alpha)
    return sn
Parameters:
radius = 50
kn = 10
alpha = 2
beta = 0
stdfactor = 0.25
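For reference, the functions above are called roughly like this with those parameters (the file name and the grayscale conversion are placeholders standing in for my microscopy images):

from skimage import io
from skimage.color import rgb2gray

image = rgb2gray(io.imread("cells.png"))  # placeholder input image
sn = do_frst(image, radius=50, kn=10, alpha=2)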
I think these highlights should be circular. I assume there is something wrong with my normals, but I haven't found anything wrong with them. Then again, finding a good test for the normals is difficult.
Here is the image:
Here is my shading code for each light, leaving out the recursive part for reflections:
lighting = (hit.obj.ambient + hit.obj.emission);

const glm::vec3 view_direction = glm::normalize(eye - hit.pos);
const glm::vec3 reflection = glm::normalize((static_cast<float>(2) * (glm::dot(view_direction, hit.normal) * hit.normal)) - view_direction);

for (int i = 0; i < numused; ++i)
{
    glm::vec3 hit_to_light = (lights[i].pos - hit.pos);
    float dist = glm::length(hit_to_light);
    glm::vec3 light_direction = glm::normalize(hit_to_light);

    Ray lightray(hit.pos, light_direction);
    Intersection blocked = Intersect(lightray, scene, verbose ? verbose : false);

    if (blocked.dist >= dist)
    {
        glm::vec3 halfangle = glm::normalize(view_direction + light_direction);
        float specular_multiplier = pow(std::max(glm::dot(halfangle, hit.normal), 0.f), shininess);

        glm::vec3 attenuation_term = lights[i].rgb * (1.0f / (attenuation + dist * linear + dist * dist * quad));
        glm::vec3 diffuse_term = hit.obj.diffuse * (std::max(glm::dot(light_direction, hit.normal), 0.f));
        glm::vec3 specular_term = hit.obj.specular * specular_multiplier;
    }
}
And here is the line where I transform the object space normal to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
Using the full Phong model instead of Blinn-Phong, I get teardrop highlights:
If I color pixels according to the (absolute value of the) normal at the intersection point I get the following image (r = x, g = y, b = z):
I've solved this issue. It turns out that the normals were all just slightly off, but not enough for the normal-colored image to show it.
I found this out by computing the normals on spheres with a uniform scale and a translation.
The problem occurred in the line where I transformed the normals to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
I assumed that the homogeneous coordinate would be 0 after the transformation because it was zero beforehand (rotations and scales do not affect it, and because it is 0, neither can translations). However, it is not 0 because the matrix is transposed, so the bottom row was filled with the inverse translations, causing the homogeneous coordinate to be nonzero.
The 4-vector is then normalized and the result is assigned to a 3-vector. The constructor for the 3-vector simply removes the last entry, so the normal was left unnormalized.
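A quick numpy sketch of the effect, using a pure translation as the model matrix (the numbers are arbitrary; only the structure matters):

import numpy as np

# The inverse transpose of a pure translation puts the translation in the
# bottom row, so a direction vector with w = 0 picks up a nonzero w.
T = np.eye(4)
T[:3, 3] = [2.0, 0.0, 0.0]          # translate by (2, 0, 0)
transinv = np.linalg.inv(T).T       # the "normal matrix"

n = np.array([1.0, 0.0, 0.0, 0.0])  # object-space normal, w = 0
n4 = transinv @ n                   # -> [1, 0, 0, -2]: w is no longer 0
n4 /= np.linalg.norm(n4)            # normalizing the 4-vector...
print(n4[:3], np.linalg.norm(n4[:3]))  # ...leaves the 3-vector unnormalized (length ~0.447)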
Here's the final picture:
Is it possible to draw a perfect horizontal line of single-pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen-aligned quad?
I have found many solutions with smoothstep or more complex functions, but I am looking for an elegant and fast way of doing this.
One solution I made uses a steeply scaled power curve, but it has many shortcomings I don't want (the line is not really one pixel high due to the smooth falloff, and it is rather tricky to get right). Here is the GLSL code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;

    // a centered horizontal line
    float v = pow(uv.y - 0.5, 2.);

    // make it steeper
    v *= 100000.;

    // make it white on a black background
    v = clamp(1. - v, 0., 1.);

    fragColor = vec4(v);
}
Here is the Shadertoy example that runs this: https://www.shadertoy.com/view/Ms2cWh
What I would like:
a perfect horizontal line drawn at a specific Y position, in pixel units or normalized
its intensity limited to the [0, 1] range without clamping
a fast way of doing it
If you just want to draw a perfect horizontal line of a single pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen-aligned quad, then maybe:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int iPosition = 250;    // the y coord in pixels
    int iThickness = 10;    // the thickness in pixels

    vec2 uv = fragCoord.xy / iResolution.xy;
    float v = float( iPosition ) / iResolution.y;
    float vHalfHeight = ( float( iThickness ) / iResolution.y ) / 2.;

    if ( uv.y > v - vHalfHeight && uv.y < v + vHalfHeight )
        fragColor = vec4(1., 1., 1., 1.);    // or whatever color
}
Here is a neat solution without branching. I don't know if it is really faster than with branching though.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    float py = iMouse.y / iResolution.y;
    float hh = 1. / iResolution.y;

    // can also be replaced with step(0., hh - abs(uv.y - py))
    float v = sign(hh - abs(uv.y - py));

    fragColor = vec4(v);
}
I know the question was answered properly before me, but in case someone is looking for a way to render a textured line in a pixel perfect way I wrote an article with some examples.
It's about pixel perfect UI in general, but using it for a line is just a matter of clamping/repeating texture sampling. Also I'm using Unity, but there is no reason the method would be exclusive to it.
I want a figure (and some text) to appear as if printed on the page of an open book. Is it possible to transform a jpg image programmatically, or in matplotlib, to achieve such an effect?
You can use a background axis along with an open-source book image to do something like this:
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax2 = fig.add_axes([0.2, 0.3, 0.25, 0.3])
#Plot page from a book
im = plt.imread("./book_page.jpg")
implot = ax1.imshow(im, origin='lower')
# Plot a graph and set background to transparent
x = np.linspace(0,4.*np.pi,40)
y = np.sin(x)
ax2.plot(x,y,'-ro',alpha=0.5)
ax2.set_ylim([-1.1,1.1])
ax2.patch.set_alpha(0.0)
from matplotlib import rc
rc('text', usetex=True)
margin = im.shape[0]*0.075
ytext = im.shape[1]/2.+10
ax1.text(margin, ytext, "The following text is an example")
ax1.text(margin, 90, "Figure 1. Showing a sine function")
plt.show()
Which looks like this,
where I used the following book image.
UPDATE: I've added a non-affine transformation based on the scikit-image warp example, but with a Maxwell distribution. The solution saves the matplotlib line plot as an image in order to apply a pointwise transform. Mapping the vector graphics directly may be possible, but I think it would be more complicated...
import numpy as np
import matplotlib.pyplot as plt
def maxwellian_transform_image(image):
    from skimage.transform import PiecewiseAffineTransform, warp

    rows, cols = image.shape[0], image.shape[1]

    src_cols = np.linspace(0, cols, 20)
    src_rows = np.linspace(0, rows, 10)
    src_rows, src_cols = np.meshgrid(src_rows, src_cols)
    src = np.dstack([src_cols.flat, src_rows.flat])[0]

    # add a Maxwellian to the row coordinates
    x = np.linspace(0, 3., src.shape[0])
    dst_rows = src[:, 1] + (np.sqrt(2/np.pi) * x**2 * np.exp(-x**2/2)) * 50
    dst_cols = src[:, 0]
    dst_rows *= 1.5
    dst_rows -= 1.0 * 50
    dst = np.vstack([dst_cols, dst_rows]).T

    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)

    out_rows = int(image.shape[0] - 1.5 * 50)
    out_cols = cols
    out = warp(image, tform, output_shape=(out_rows, out_cols))
    return out
#Create the new figure
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
#Plot page from a book
im = plt.imread("./book_page.jpg")
implot = ax.imshow(im, origin='lower')
# Plot and save graph as image, will need some manipulation of location
temp, at = plt.subplots()
margin = im.shape[0]*0.1
x = np.linspace(margin,im.shape[0]/2.,40)
y = im.shape[1]/3. + 0.1*im.shape[1]*np.sin(12.*np.pi*x/im.shape[0])
at.plot(x,y,'-ro',alpha=0.5)
temp.savefig("lineplot.png",transparent=True)
#Read in plot as an image and apply transform
plot = plt.imread("./lineplot.png")
out = maxwellian_transform_image(plot)
ax.imshow(out, extent=[0,im.shape[1],0,im.shape[0]])
plt.show()
The figure now looks like,
I'm plotting some coastline data on a sphere using the vispy interface for OpenGL ES 2.0. I'm using latitude and longitude values to work out 3d coordinates for the data on a sphere and just plotting these. I'm able to successfully draw the data but I wanted to see only those data points on the side of the sphere that would be visible from the viewport.
I've tried two quite different approaches to create this effect but both have led to the same problem. First, I calculated the dot product of the view direction and data position and drew only those with a negative result (i.e. only those points facing the viewport) and, secondly, I simply drew a plane through the centre of the sphere, perpendicular to the view direction.
In both cases I observed the same - the plane appeared to be slightly offset away from the viewport, behind the centre of the sphere. In other words, you can see the data wrap around the back of the sphere slightly before it's masked by the plane.
I've checked that the points I'm drawing are, in fact, on the unit sphere and I feel confident that, from the 3d world point of view, everything is sound. What I am less confident with, as a relative beginner to 3d graphics, is whether I'm misunderstanding something with the projection matrix. I've done some reading - but my understanding leads me to think that the projection shouldn't change the order of points in the "Z direction" (the direction the viewport is facing).
I'm confident this isn't a depth test issue as my first approach didn't have depth test enabled and masking was done in the vertex shader (by setting the fragment colour alpha to 0.0). Aside from this, I've not been able to find any other explanation for the issue.
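For reference, the first (dot product) approach boils down to something like the following in numpy terms; in my code the test was actually done per vertex in the shader, and x3d, y3d and z3d are the coastline coordinates built in the code below:

import numpy as np

# Keep only points whose position on the unit sphere faces the viewpoint,
# i.e. whose dot product with the view direction is negative.
view_dir = np.array([1.0, 0.0, 0.0])        # view direction assumed for illustration
points = np.stack([x3d, y3d, z3d], axis=1)  # unit-sphere coastline points
front_facing = points @ view_dir < 0
visible_points = points[front_facing]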
Here's the code for the plane approach:
import numpy as np
import cartopy
from vispy import app
from vispy import gloo
import time
from vispy.util.transforms import perspective, translate, rotate
xpts = []
ypts = []

# getting coastlines data
for string in cartopy.feature.NaturalEarthFeature('physical', 'coastline', '10m').geometries():
    for line in string:
        points = list(line.coords)
        for point in points:
            xpts.append(point[0])
            ypts.append(point[1])

coasts = np.array(list(zip(xpts, ypts)), dtype=np.float32)

theta = (np.pi / 180) * np.array(xpts, dtype=np.float32)
phi = (np.pi / 180) * np.array(ypts, dtype=np.float32)

x3d = np.cos(phi) * np.cos(theta)
y3d = np.sin(theta) * np.cos(phi)
z3d = np.sin(phi)
vertex = """
// Uniforms
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
uniform vec3 u_color;
attribute vec3 a_position;
void main (void)
{
gl_Position = u_projection*u_view*u_model*vec4(a_position, 1.0);
}
"""
fragment = """
// Uniforms
uniform vec3 u_color;
void main()
{
gl_FragColor = vec4(u_color, 1.0);
}
"""
class Canvas(app.Canvas):
    def __init__(self):
        app.Canvas.__init__(self, keys='interactive')
        gloo.set_state(clear_color='red', depth_test=True, blend=True,
                       blend_func=('src_alpha', 'one_minus_src_alpha'))

        self.x = 0
        self.plane = 5 * np.array([(0., -1., -1., 1), (0, -1., +1., 1),
                                   (0, +1., -1., 1), (0, +1., +1., 1)], dtype=np.float32)
        self._timer = app.Timer(connect=self.on_timer, start=True)

        self.program = gloo.Program(vertex, fragment)
        self.view = np.dot(rotate(-90, (1, 0, 0)),
                           np.dot(translate((-3, 0, 0)), rotate(-90.0, (0.0, 1.0, 0.0))))
        self.model = np.eye(4, dtype=np.float32)
        self.projection = perspective(45.0, self.size[0] / float(self.size[1]), 2.0, 10.0)
        self.program['u_projection'] = self.projection
        self.program['u_view'] = self.view
        self.program['u_model'] = self.model
        self.program['u_color'] = np.array([0.0, 0.0, 0.0], dtype=np.float32)

        self.program2 = gloo.Program(vertex, fragment)
        self.program2['u_projection'] = self.projection
        self.program2['u_view'] = self.view
        self.program2['u_model'] = self.model
        self.program2['u_color'] = np.array([1.0, 1.0, 1.0], dtype=np.float32)
        self.program2['a_position'] = self.plane[:, :3].astype(np.float32)

    def on_timer(self, event):
        self.x += 0.05
        self.model = rotate(self.x, (0.0, 0.0, 1.0))
        pointys = np.concatenate((x3d, y3d, z3d)).reshape((3, -1)).T
        self.program['a_position'] = pointys
        self.program['u_model'] = self.model
        self.update()

    def on_resize(self, event):
        gloo.set_viewport(0, 0, *event.size)
        self.projection = perspective(45.0, event.size[0] / float(event.size[1]), 2.0, 10.0)
        self.program['u_projection'] = self.projection
        self.program2['u_projection'] = self.projection

    def on_draw(self, event):
        gloo.clear((1, 1, 1, 1))
        self.program2.draw('triangle_strip')
        self.program.draw('points')
canvas = Canvas()
canvas.show()
app.run()
The way I understand your description, what you're seeing is a result of the perspective projection. I used all of my MS Paint skills to create this very elaborate diagram of the situation viewed from the side:
The outline of the sphere is drawn in black. The red line indicates a plane through the center of the sphere.
The blue lines show two lines of sight from the viewpoint, which is at the bottom of the diagram. If you picture the result after applying the projection, what shows up as the front facing part of the sphere in the rendered image is everything below the green line. The parts of the sphere above the green line form the back facing part of the sphere in the resulting rendering.
Or in other words, the green line shows the plane that corresponds to the outline of the sphere in the resulting rendering.
As you can see from this, the plane through the center of the sphere is indeed some distance behind the section of the sphere that shows up as the front facing part of the sphere in the rendered image. This is just in the nature of a perspective projection. The distance between the red plane and the green plane will decrease with a smaller viewing angle (i.e. a weaker perspective), and the two are the same when using a parallel projection.
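To put a number on it for the setup in the question (a unit sphere with the camera roughly three units from the centre), the silhouette plane sits r²/d in front of the centre. A quick sketch, assuming those values:

# Silhouette of a sphere under perspective projection: the tangent points seen
# from an eye at distance d from the centre of a sphere of radius r lie in a
# plane r**2 / d in front of the centre (right-triangle geometry).
r = 1.0   # unit sphere, as in the question
d = 3.0   # eye-to-centre distance implied by translate((-3, 0, 0))
print(r**2 / d)      # ~0.333: the centre plane is this far behind the visible silhouette
print(r**2 / 100.0)  # a far more distant camera (weaker perspective) shrinks the gap towards 0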