Zooming out on a pandas table in Jupyter Notebooks - macOS

I am attempting to take a screenshot of a correlation table made with pandas in Jupyter Notebooks, but since it is very wide I must scroll to the right to view the whole table. On a Mac it is not possible to scroll left or right while taking a screenshot, so I am unable to capture the entire table. Is there any way to get the entire table? It doesn't have to be a screenshot; an export of some type would work as well.

Is the table already shown in Markdown? If not, you can try df.to_html(). From there, you can convert it to PDF; see https://www.npmjs.com/package/markdown-pdf
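A minimal sketch of the to_html() route, assuming df is your correlation DataFrame and table.html is just a placeholder filename:
html = df.to_html()
with open('table.html', 'w') as f:
    f.write(html)  # open this file in a browser and print/export it to PDF, or convert it with a tool such as markdown-pdf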
Otherwise, you can plot the pandas table with matplotlib and remove the axes, etc.:
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import table  # on very old pandas versions: from pandas.tools.plotting import table
ax = plt.subplot(111, frame_on=False)  # no visible frame
ax.xaxis.set_visible(False)  # hide the x axis
ax.yaxis.set_visible(False)  # hide the y axis
table(ax, df)  # where df is your data frame
plt.savefig('table.png')

Related

Different images in Image.show() and Image.save() in PIL

I generate a PIL image from a NumPy array. The image displayed by the show function differs from what is saved by the save function called directly after show. Why might that be the case, and how can I solve this? I am using the TIFF file format and viewing both images in the Windows Photos app.
from PIL import Image
import numpy as np
orig_img = Image.open('img.tif')
dent = Image.open('mask.tif')
img_np = np.asarray(orig_img)
dent_np = np.asarray(dent)
dented = img_np*0.5 + dent_np*0.5  # note: this produces a float array
im = Image.fromarray(dented)
im.show('dented')
im.save("dented_2.tif", "TIFF")
Edit: I figured out that the save function saves correctly if the pixel values in the NumPy array called 'dented' are normalized to the 0-1 range. However, the show function then displays the image as completely black.
I suspect the problem is related to the dtype of your variable dented. Try:
print(img_np.dtype, dented.dtype)
As a possible solution, you could use:
im = Image.fromarray(dented.astype(np.uint8))
You don't actually need to go via NumPy to do the maths and then convert back if you want the mean of two images, because you can do that directly in PIL.
from PIL import ImageChops
mean = ImageChops.add(imA, imB, scale=2.0)  # (imA + imB) / 2.0, i.e. the pixel-wise mean, computed in uint8

Handling matplotlib.figure.Figure using OpenCV

I am using the following code to find the spectrogram of a signal and save it.
spec, freq, t, im = plt.specgram(raw_signal, Fs=100, NFFT=100, noverlap=50)
plt.axis('off')
figure = plt.gcf()
figure.set_size_inches(12, 1)
plt.savefig('spectrogram', bbox_inches='tight', pad_inches=0)
But I have multiple spectrograms like this, and the end product I need is a concatenation of all of them. Right now I save each individual image with plt.savefig() as above, read the files back with cv2.imread(), and concatenate them. This process does not seem very good, so is there any other way to do it without saving and re-reading the images?
One possible idea is to convert the matplotlib.figure.Figure into a format that can be handled by OpenCV (specifically cv2). However, it should also not have any white padding.
You can get the image as an array using buffer_rgba (don't forget to draw the figure first). Then, for OpenCV, you need to convert the image from RGB(A) to OpenCV's BGR(A) channel ordering.
import matplotlib.pyplot as plt
import numpy as np
import cv2
raw_signal = np.random.random(1000)
spec, freq, t, im = plt.specgram(raw_signal, Fs=100, NFFT=100, noverlap=50)
plt.axis('off')
figure = plt.gcf()
figure.set_size_inches(12, 1)
figure.set_dpi(50)
figure.canvas.draw()  # render the figure before reading its buffer
b = figure.axes[0].get_window_extent()  # bounding box of the axes, in pixels
img = np.array(figure.canvas.buffer_rgba())  # full canvas as an RGBA array
img = img[int(b.y0):int(b.y1), int(b.x0):int(b.x1), :]  # crop to the axes, dropping the padding
img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)  # RGBA -> BGRA for OpenCV
cv2.imshow('OpenCV', img)
cv2.waitKey(0)  # needed for the imshow window to actually appear
Top: matplotlib; bottom: OpenCV.
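Once each figure has been rendered to an array like this, the spectrograms can be concatenated in memory rather than via saved files. A minimal sketch, where img1 and img2 are hypothetical names for two such BGRA arrays of equal width:
import numpy as np
import cv2
# img1, img2: BGRA arrays produced as above, assumed to share the same width
combined = np.vstack([img1, img2])  # vertical concatenation, no files involved
cv2.imshow('combined', combined)
cv2.waitKey(0)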
Don't save the figure. matplotlib happens to have a convenience function for displaying time-series data in this way, but that's not how you deal with spectrograms; any handling of spectrogram "pictures" is a kludge.
Use scipy.signal.spectrogram to get the actual spectrogram.
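A minimal sketch of that route, reusing the sampling rate and window settings from the question:
import numpy as np
from scipy import signal
raw_signal = np.random.random(1000)
f, t, Sxx = signal.spectrogram(raw_signal, fs=100, nperseg=100, noverlap=50)
# Sxx has shape (len(f), len(t)); spectrograms of several signals can be
# computed this way and concatenated directly with np.hstack / np.vstack.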

Trellis layered plots in Altair/Vega-Lite

I would like to compare multiple conditions in an Altair (ultimately Vega-Lite) layered plot. The perfect solution would be to facet/trellis the plot so I can see the different conditions side by side. Unfortunately, I cannot figure out how to give the command to plot the different conditions.
Here is my attempt to implement my idea, based on the example for layered plots:
https://github.com/ellisonbg/altair/blob/master/altair/notebooks/07-LayeredCharts.ipynb
import pandas as pd
import numpy as np
from altair import Chart, LayeredChart
data = pd.DataFrame({'x': np.random.rand(10), 'y': np.random.rand(10), 'z': ['a', 'b']*5})
chart = LayeredChart(data)
chart += Chart().mark_line().encode(x='x:Q', y='y:Q', column='z:Q')
chart += Chart().mark_point().encode(x='x:Q', y='y:Q', column='z:Q')
chart
Compared with the example, I added the column 'z' with the two conditions, and the two column encodings in the Chart definitions.
This generates seemingly valid Vega-Lite code, but no plot. Alternatively, I tried chart = LayeredChart(data).encode(column='z:Q'), but then I got the error 'LayeredChart' object has no attribute 'encode'.
I am wondering whether it is possible to facet (trellis) layered plots at all, and whether it will be possible in future Vega-Lite releases.
I am using Jupyter with Anaconda.
Layering is only experimentally supported in the current release of Vega-Lite and Altair, and I believe you've hit one of the unsupported aspects. This should be addressed in the Vega-Lite 2.0 release (and associated Altair release) later this spring.
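For reference, a sketch of how this could look once faceting of layers is supported; it assumes a later Altair release (2.x or newer), where layered charts gained a .facet() method, and treats z as a nominal field:
import altair as alt
import numpy as np
import pandas as pd
data = pd.DataFrame({'x': np.random.rand(10), 'y': np.random.rand(10), 'z': ['a', 'b']*5})
line = alt.Chart().mark_line().encode(x='x:Q', y='y:Q')
points = alt.Chart().mark_point().encode(x='x:Q', y='y:Q')
# data is attached to the layer so that the facet operator sees it at the top level
chart = alt.layer(line, points, data=data).facet(column='z:N')
chart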

Python 3.4: Where is the image library?

I am trying to display images with only built-in functions, and there are plenty of Tkinter examples online. However, none of these imports work:
import Image # none of these exist.
import tkinter.Image
import _tkinter.Image
etc
However, tkinter itself does exist; a hello-world with buttons worked fine.
I am on a MacBook Pro running 10.6.8 and using PyCharm.
Edit: the best way so far (a little slow but tolerable):
Get the pixel array as a 2D list (you can use a third-party .py to load your image).
Now build a data string from the pixels like this (this is the weirdest format I have seen; why not a simple 2D array?). It may come out sideways, so non-square images may raise an error; I will have to check.
Imports:
from tkinter import *
import tkinter
# pixels is a 2D list of colour strings such as "#rrggbb", loaded elsewhere
data = list()  # the image is x pixels by y pixels.
y = len(pixels)
x = len(pixels[0])
for i in range(y):
    data.append('{')
    for j in range(x):
        data.append(pixels[i][j] + " ")
    data.append("} ")
data = "".join(data)
Now you can create an image and use put:
# PhotoImage is built in to tkinter.
# It does NOT need PIL, Pillow, or any other externals.
im = PhotoImage(width=x, height=y)
im.put(data)
Finally, attach it to the canvas:
canvas = tkinter.Canvas(width=x, height=y)
canvas.pack()
canvas.create_image(x/2, y/2, image=im)  # x/2 and y/2 are the center.
tkinter.mainloop()  # enter the main loop and the image will be drawn.
The image must be kept in a global (or otherwise long-lived) reference, or it may not show up because the garbage collector gets greedy.
PIL hasn't been updated since 2009, with Python 3 support terminally stuck at "later."
Instead, try Pillow, a fork of PIL that does provide Python 3 support.
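A minimal sketch of displaying an image with Pillow in Tkinter, assuming Pillow is installed (pip install Pillow) and 'img.png' is a placeholder filename:
import tkinter as tk
from PIL import Image, ImageTk
root = tk.Tk()
photo = ImageTk.PhotoImage(Image.open('img.png'))  # keep a reference so it isn't garbage-collected
label = tk.Label(root, image=photo)
label.pack()
root.mainloop()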

Adding contrast to an image in Python

I am trying to work through a process where I take an astronomical FITS file, subtract a masterflat file, and then alter the contrast of the resultant image.
The first part has been done successfully, but my image lacks contrast. Here's my code:
from astropy.io.fits import getdata
import numpy as np
import os
from scipy import misc
from PIL import Image, ImageEnhance
os.chdir("/localdir/")
image = getdata('23484748.fts')
flat = getdata('Masterflat.fit')
normalized_flat = flat / np.mean(flat)
calibrated_image = image / normalized_flat
pix = np.fliplr(calibrated_image)
# the problem starts about here. How do I alter the contrast of pix?
from matplotlib import pyplot as plt
misc.imsave('saved image.gif', pix)  # uses the Image module (PIL)
plt.imshow(pix, interpolation='nearest')
plt.show()
Now, before you tell me all about the PIL functions, matplotlib, etc.: I have tried these without success.
I have tried to use Image.fromarray to convert my NumPy array into an image, but the resultant image displays as pure white.
How can I take my NumPy array (pix) and change its contrast?
For testing purposes I have put the two sample files at http://members.optusnet.com.au/berrettp/
Thanking you
Peter
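A hedged sketch of one possible fix, a percentile-based linear stretch applied before converting to 8-bit; it assumes pix is the calibrated float array from the code above:
import numpy as np
from PIL import Image
low, high = np.percentile(pix, (1, 99))  # clip to the 1st/99th percentiles
stretched = np.clip((pix - low) / (high - low), 0, 1)  # rescale to the 0-1 range
im = Image.fromarray((stretched * 255).astype(np.uint8))  # 8-bit image with visible contrast
im.save('stretched.png')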

Resources