I have a large FITS file (over 30,000 x 30,000 pixels). IRAF cannot handle an image of this size. How can one crop a file of this size while retaining correct header information, as IRAF does in its standard cropping mode?
You can do this sort of cropping with astropy.io.fits, though it's not trivial yet. Since astropy.io.fits uses memory mapping by default, it should be able to handle arbitrarily large files (within some practical limits). If you want non-python solutions, look here for details about postage stamp creation.
from astropy.io import fits
from astropy import wcs

# memory-mapped by default, so the full array is not read into RAM
f = fits.open('file.fits')
w = wcs.WCS(f[0].header)

# build a new HDU with the cropped data and a copy of the original header
newf = fits.PrimaryHDU()
newf.data = f[0].data[100:-100, 100:-100]
newf.header = f[0].header
# slice the WCS the same way as the data so the reference pixel stays consistent
newf.header.update(w[100:-100, 100:-100].to_header())
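If you then want the cropped image on disk, writing the new HDU out with writeto should work (the output filename here is just an example):
newf.writeto('cropped.fits')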
See also this pull request, which implements a convenience Cutout2D class, though it is not yet available in a released version of astropy. Its usage can be seen in the documentation; the example below is modified to include the WCS:
from astropy.nddata import Cutout2D

position = (49.7, 100.1)  # (x, y) centre of the cutout
shape = (40, 50)          # (ny, nx) size of the cutout
cutout = Cutout2D(f[0].data, position, shape, wcs=w)
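If you want to write the cutout back out as its own FITS file, the Cutout2D object carries the sliced data and an updated WCS (a minimal sketch, reusing f from above; the output filename is arbitrary):

hdu = fits.PrimaryHDU(data=cutout.data, header=cutout.wcs.to_header())
hdu.writeto('cutout.fits')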
There are more examples here.
I am currently trying to do some TensorFlow inference (C backend) using Boost::GIL (challenging). I have been able to load my PNG image (rgb8_image_t) and convert it to rgb32f_image_t.
I still need three things: the raw pointer to the data, the amount of memory allocated, and the dimensions.
For the memory allocated, unfortunately the function total_allocated_size_in_bytes() is private, so I did this:
boost::gil::view(dest).size() * boost::gil::view(dest).num_channels() * sizeof(value_type);
This is valid as long as there is no extra padding for alignment. But is there a nicer alternative?
For the dimensions, I need to match numpy (via PILLOW); I hope both libraries use the same memory layout. From my understanding, the data is interleaved and contiguous by default, so it should be fine.
Last, the raw pointer _memory: it is a private data member of the image class with no dedicated accessor. boost::gil::view(dest).row_begin(0) returns an iterator to the first pixel, but I am not sure how to get the pointer to the underlying _memory. Any suggestions?
Thank you very much,
++t
ps: TensorFlow offers a C++ backend; however, it is not available from any package manager, and wrestling with Bazel is beyond my strength.
The GIL documentation describes the various memory layouts pretty accurately.
The point of the library, though, is to abstract away the memory layout. If you require a particular representation (planar/interleaved, packed or unpacked), you are doing things "the hard way" as far as the library interface is concerned.
So, I think you can read and convert in one go, e.g. for a jpeg:
gil::rgb32f_image_t img;
gil::image_read_settings<gil::jpeg_tag> settings;
gil::read_and_convert_image("input.jpg", img, settings);
Now getting the raw data is possible:
auto* raw_data = gil::interleaved_view_get_raw_data(view(img));
It happens to be the case that the preferred storage in the implementation is interleaved, which is likely what you're expecting. If your particular image storage is planar, the call will not compile (and you'd probably want planar_view_get_raw_data(vw, plane_index) instead).
Note that you'll have to reinterpret_cast to float [const]* if you need that, because there is no public interface to get a reference to scoped_channel_value<>::value_, but the BaseChannelValue type is indeed float and you can assert that the wrapper doesn't add additional weight:
static_assert(sizeof(float) == sizeof(raw_data[0]));
Alternative Approach:
Alternatively, you can set up your own raw pixel buffer, mount a mutable view over it, and use that as the target of the read/convert for the initial load:
// get dimension
gil::image_read_settings<gil::jpeg_tag> settings;
auto info = gil::read_image_info("input.jpg", settings).get_info();
// setup raw pixel buffer & view
using pixel = gil::rgb32f_pixel_t;
auto data = std::make_unique<pixel[]>(info._width * info._height);
auto vw = gil::interleaved_view(info._width, info._height, data.get(),
info._width * sizeof(pixel));
// load into buffer
gil::read_and_convert_view("input.jpg", vw, settings);
I've actually checked that it works correctly by writing out the resulting view:
//// just for test - doesn't work for 32f, so choose another pixel format
//gil::write_view("output.png", vw, gil::png_tag());
In an old version of my code, I used to do a hardcopy() with a given resolution, i.e.:
frame = hardcopy(figHandle, ['-d' renderer], ['-r' num2str(round(pixelsperinch))]);
For reference, hardcopy saves a figure window to file.
Then I would typically perform:
ZZ = rgb2gray(frame) < 255/2;
se = strel('disk',diskSize);
ZZ2 = imdilate(ZZ,se); %perform dilation.
Surface = bwarea(ZZ2); %get estimated surface (in pixels)
This worked until I switched to Matlab 2017, in which the hardcopy() function is deprecated and we are left with the print() function instead.
I am unable to extract the data from the figure handle at a specific resolution using print. I've tried many things, including:
frame = print(figHandle, '-opengl', strcat('-r',num2str(round(pixelsperinch))));
But it doesn't work. How can I overcome this?
EDIT
I don't want to 'save' or create a figure file; my aim is to extract the data from the figure in order to measure a surface after a dilation process. I just want to keep this information, and since I'm processing a LOT of different trajectories (approx. 1e7 in total), I don't want to save each file to disk (this is costly in terms of execution time). I'm running this code on a remote server (without a graphics card).
The issue I'm struggling with is: "One or more output arguments not assigned during call to "varargout"."
getframe() does not allow setting a specific resolution (it uses the current resolution instead, as far as I know).
EDIT2
OK, I figured out how to do it: you need to pass the '-RGBImage' argument, like this:
frame = print(figHandle, ['-' renderer], ['-r' num2str(round(pixelsperinch))], '-RGBImage');
It also accepts a custom resolution and renderer, as specified in the documentation.
I think you must specify formattype too (-dtiff in my case). I've tried this in Matlab 2016b with no problem:
print(figHandle,'-dtiff', '-opengl', '-r600', 'nameofmyfig');
EDIT:
If you need the CData, just find the handle of the corresponding image object and get its CData:
f = findobj('Tag','mytag')
Then depending on your matlab version use:
mycdata = get(f,'CData');
or directly
mycdata = f.CData;
EDIT 2:
You can set the tag of your image programmatically and then do what I said previously:
a = imshow('peppers.png');
set(a,'Tag','mytag');
I've been attempting to use the MAFFT command line tool as a means to identify coding regions within a genome. My general process is to align the amino acid consensus sequence of a gene to a translated reading frame of a target sequence. My method has been largely successful. However, I've noticed some peculiar alignments which will unfortunately impede my annotation method. The following is one such example (note: I've also included a pairwise alignment from the Pairwise2 Biopython module to demonstrate my desired output; unfortunately, Pairwise2 is nearly 20 times slower than the MAFFT command line):
import io, os
from time import time
from Bio.SubsMat import MatrixInfo as matlist
from Bio import pairwise2, AlignIO
from Bio.pairwise2 import format_alignment
from Bio.Align.Applications import MafftCommandline
startTime = time()
sample_tList = [['>Frame 1', 'RIGVGSIPRHLYCQELPLAQPKTCCAETPFRDSPLQGRLGVCPHLASGVALLYGLSTPLTMSGILDRCTCTPNARVFMAEGQVYCTRCLSARSLLPLNLQVPELGVLGLFYRPEEPLRWTLPRAFPTVECSPAGACWLSAIFPIARMTSGNLNFQQRMVRVAAEIYRAGQLTPAVLKVLQVYERGCRWYPIVGPVPGVGVYANSLHVSDKPFPGATHVLTNLPLPQRPKPEDFCPFECAMADVYDIGHGAVMFVAGGKVSWAPRGGDEVRFETVPEELKLIANRLHISFPPHHLVDMSKFAFIVPGSGVSLRVEHQHGCLPADIVPKGNCWWCLFDLLPPGVQNREIRYANQFGYQTKHGVSGKYLQRRLQINGLRAVTDTHGPIVVQYFSVKESWIRHFRLAGEPSLPGFEDLLRIRVESNTSPLADKDEKIFRFGSHKWYGAGKRARKARSGATTTVAHRASSARETRQAKKHEGVDANNAAHLEHYSPPAEGNCGWHCISAIVNRMVNSNFETTLPERVRPSDDWATDEDFVNTIQILRLPAALDRNGACKSAKYVLKLEGEHWTVSVAPGMSPSLLPLECVQGCCEHKGGLGSPDAVEVSGFDPTCLDRLAEVMHLPSSVIPAALAEMSNNSDRPASLVNTAWTVSQFYARHTGGNHRDQVRLGKIISLCQVIEECCCHQNKTNRATPEEVAAKIDQYLRGATSLEECLIKLERVSPPSAADTSFDWNVVLPGVEAAGPTTEQPHANQCCAPVPVVTQEPLDKDSVPLTAFSLSNCYYPAQGDEVRHRERLNSVLSKLEEVVLEEYGLMPTGLGPRPVLPSGLDELKDQMEEDLLKLANAQATSEMMALAAEQVDLKAWVKSYPRWIPPPPPPKVQPRRMKPVKSLPENKPVPAPRRKVRSDPGKSILAVGGPLNFSTPSELVTPLGEPVLMPASQHVSRPVTPLSEPAPVPAPRRIVSRPMTPLSEPTFVFAPWRKSQQVEEANPAAATLTCQDEPLDLSASSQTEYEAYPLAPLENIGVLEAGGQEAEEVLSGISDILDNTNPAPVSSSSSLSSVKITRPKYSAQAIIDSGGPCSGHLQKEKEACLRIMREACDAARLGDPATQEWLSHMWDRVDVLTWRNTSVYQAFRTLDGRFGFLPKMILETPPPYPCGFVMLPHTPTPSVSAESDLTIGSVATEDVPRILGKTENTGNVLNQKPLALFEEEPVCDQPAKDSRTLSRESGDSTTAPPVGTGGAGLPTDLPPLDGVDADGGGLLRTAKGKAERFFDQLSRQVFNIVSHLPVFFSHLFKSDSGYSPGDWGFAAFTLFCLFLCYSYPFFGFAPLLGVFSGSSRRVRMGVFGCWLAFAVGLFKPVSDPVGAACEFDSPECRNILHSFELLKPWDPVRSLVVGPVGLGLAILGRLLGGARYIWHFLLRLGIVADCILAGAYVLSQGRCKKCWGSCIRTAPNEIAFNVFPFTRATRSSLIDLCDRFCAPKGMDPIFLATGWRGCWTGQSPIEQPSEKPIAFAQLDEKRITARTVVSQPYDPNQAVKCLRVLQAGGAMVAEAVPKVVKVSAIPFRAPFFPTGVKVDPECRIVVDPDTFTTALRSGYSTTNLVLGVGDFAQLNGLKIRQISKPSGGGPHLIAALHVACSMVLHMLAGVYVTAVGSCGTGTSDPWCANPFAVPGYGPGSLCTSRLCISQHGLTLPLTALVAGFGLQEIALVVLIFVSIGGMAHRLSCKADMLCILLAIASYVWVPLTWLLCVFPCWLRWFSLHPLTILWLVFFLISVNMPSGILAVVLLVSLWLLGRYTNIAGLVTPYDIHHYTSGPRGVAALATAPDGTYLAAVRRAALTGRTMLFTPSQLGSLLEGAFRTRKPSLNTVNVVGSSMGSGGVFTIDGRIKCVTAAHVLTGNSARVSGVGFNQMLDFDVKGDFAIADCPNWQGVAPKTQFCGDGWTGRAYWLTSSGVEPGVIGDGFAFCFTACGDSGSPVITEAGELVGVHTGSNKQGGGIVTRPSGQFCNVTPIKLSELSEFFAGPKVPLGDVKVGSHIIKDTSEVPSDLCALLAAKPELEGGLSTVQLLCVFFLLWRMMGHAWTPLVAVGFFILNEVLPAVLVRSVFSFGMFALSWLTPWSAQVLMIRLLTAALNRNRVSLIFYSLGAVTGFVADLATTQGHPLQAVMNLSTYAFLPRMMVVTSPVPAIACGVVHLLAIILYLFKYRCLHHVLVGDGAFSAAFFLRYFAEGKLREGVSQSCGMSHESLTGALAIKLSDEDLDFLTKWTDFKCFVSASNMRNAAGQFIEAAYAKALRIELAQLVQVDKVRGTLAKLEAFADTVAPQLSPGDIVVALGHTPVGSIFDLKVGSTKHTLQAIETRVLAGSKMTVARVVDPTPAPPPAPVPIPLPPKVLENGPNAWGGEDRLNKRKRRRMEAVGIFVMDGKKYQKFWDKNSGDVFYEEVHNSTDEWECLRAGDPADFDPETGIQCGHVTIEDKVYNVFTSPSGRRFLVPANPENRRIQWEAARLSVEQALGMMNVDGELTAKELEKLKRIIDKLQGLTKEQCLNCPPVAPAVVAAAWLLLRQRKNFTTGPSPDLTKWPVRLSRTRSSTTNIRLPNRLMVVLCSCAPLFLRLMSSPALMHLLSYLPATGRETLGLMARFGILRPRPPKRKSHLVRKYRLVTLGAVTHLKLVSLISCTLLGATLSGKEFYRIQGLETYLTEPPVTLEAQCMRLPASRPMLLRLMGVPSWPQPCPPVLSCMYRPFQRPSLIILILGLTALNSQSTVVRMLLGTSPNTICPPKALFCLEFFALCGSTCLPMWVSARPFIGLPLTLPRILWLEMGTDFQPRIFRASLKSTFCAHRLCEKTGKLLLLVPSRSSIVGRRRLGQYLALITLRWPTGQRVVLPRASKRHSTRPSPSEKTNLRNYILQFAGALKLILHPAIDPHLQLSAGSLPIFFMNSPVLKSIYRRTCLTAVTTYWLRSPARLREAACRLATRLPPCQTPFTAYMHSTWCSVTLKVVTLMAFCFCKTSSLRTCSRFNPSSIQTTSCCMPSLPPCQITTGGLNITLCVSKRTQRRQPQTRHHFVAGMGVSSLTVTGFLRPSPTIRQAMSLNTTPRRLQYLWTAVLVSMILSGLKSSWLVRSAPARTVTASQARRSSCPCGKNSGPIMKGRSPECAGTAEPRLRTPLPVASTSVLTTPISTSIVLSSGVATRRVLALVVSVNLPWEKAQVLWMRCNKSRISLRGLSCMWSRVSPLLTQVDTKLAADSPLGVASGETKLTCQTVIMPVPPCSPLVKRSTWSLSPPTCCAAGSSSVPPALGKHTGSSNRSRMVMSFTRQLTRPCLTLGLWGCAGSTSQRVRRCNSLPPLVPARGFASWPAVGVLVRIPFWTKQRIAITLMSGFLAKPPLPAEISNNSTRWVLTLIAMFLTSCLRPNRPSGDSDRISVMPSNQITGTNLCPWSTQPVPRWTNLSGMGKSSPPTTGTERTAPSLSTPVKVPHLMWLHCICPLKIHSTGNEPLLLSPGQDMQSSCMTHTGNCRA
CLIFLRKAHPSTSQCSVTSSSYIEITKNARLLRLAMEINSGLQTSALILSAPFVQIWKGRAPRSPKLHITWGSISHLIHSLLNSQQNSHPTGPWQPRTMKSGLIGWLPAFAPSINIAARALVQAIWWAPRCFAPQGLCHTTSQNLLGARLKCFLRQSSAPAELRIAGSTSMIGSEKLLSPSHMPSLATSKALPVGDVITSPPDTFRASFLRNQLRSGFLAPEKLQRQFAHQMCTSQILKRTSTQRPSPSAGKCWILEKSDWSGKTRRPIFNLKAAISPGINLQATPHTSEFLLILQCIWTPAWALPFATGGLLGPPIGELTSRSPLMITVPKSFCLVHTMVKCLQGTKFWRARSSRLTTQGTNTLGDLNRIQRICTSLLGMVRTGRIIMKRFGRARKGKFIRLLPPASFIFPRALSLNQLATEMKWGLCRASLTKLVNFLWMLSRNFWCPLLISSYFWPFCLASPSPAGWWSFASDWFAPRYSVRALPFTLSNYRRSYEAFLSQCQVDIPTWGVKHPLGILWHHKVSTLIDEMVSRRMYRIMEKAGQAAWKQVVSEATLSRISNLDVVAHFQHLAAIEAETYKYLASRLPMLHNLRMTGSNVTIVYNSTLNQVFAIFPTSGSRPRLHDSQQWLIAVHSSIFSSVVASCTLFVVLWLRIPMLRSVFGFRWLGAIFLLNSRITRCVRLASPGRPLLRSMNPVGLFGAGGMTDAVRTTMTNGSWFRLASAKATPVFTPGWRSCHSATRPSSIPRYLGGTVKFMLTSRTNSFAPSTTGRTPPCLAMTTFQPYFRPTTNIRSTAVIGFTNGCAPSFPLGWFMFRGFSGVRLQAMFQFKSFRHQDQHYRSIRLCCPPGHQLPVWRLAPSDGSQELSVPHGDRDTRVHHHHSQCHRELFTFFSPHAFLLPFLCFDEKGIQSGIWQCVRHRGCVCLYQLRPTCQGVHPTLLGSRSCATASFHDTDHEVGNRFSLSFCHPTGNLNVQVCWGNAPRAVTRNCFLCGVSCRSVLLCSSTPAATAALIFSFITRYVSMAQIGWQKDLTGQWRLLSFFLCLTLFPMEHSPPAIFLTRLVSLCPPPGSITGGMSVVSMRSVLWLRFASSLGLRRTACPGATLVLDTPTSFWTLRADSIVGGRPLLRKGVRLKSRVTSTSKELCLMVPWQPLPEFQRNNGVVSRRLLPHGSTKGAFGVFHYLYASDDICSKGKSRPTARASAPFDLPELCFYLRVHDIRALSEHKGRAHYGGSSCTSLGGVLSHRNLEIHHLQMPFVLARPQVHSGPCPPRRKCRGLSSDCGKPRICRPASRLHYGRHIGARVEKPRVGWQKSCTGSGKPCQICQITTASSKRERRGTASQSISCARCWVRSSPNKTSPEARDRGRKIIREARRSPIFLRLKKMSGTTSPLVSGNCVCRRSRLPLTRAPGHVPCQIQGGVTLWSLVCRRIILCASASQHHPQHDELAFFGHLGVMIGRMCGEWHLTLCLVTYSIRATVWGSLIGENHAAAIKKKKKKKK'], ['>ORF2_GP2', 'MKWGLCKASLTKLANFLWMLSRSFWCPLLISSYFWPFCLASQSPVGWWSFASDWFAPRYSVRALPFTLSNYRRSYEAFLSQCQVDIPTWGVKHPLGVLWHHKVSTLIDEMVSRRMYRIMEKAGQAAWKQVVSEATLSRISGLDVVAHFQHLAAIEAETCKYLASRLPMLHNLRLTGSNVTIVYNSTLDQVFAIFPTPGSRPKLHDFQQWLIAVHSSIFSSVAASCTLFVVLWLRIPMLRSVFGFRWLGATFLLNSW']]
ex_file = open("newTempFile112233.fasta", "w")
for items in sample_tList:
    ex_file.write(items[0] + "\n")
    ex_file.write(items[1] + "\n")
ex_file.close()
in_file = '.../msa_example.fasta'
mafft_exe = '/usr/local/bin/mafft'
mafft_cline = MafftCommandline(mafft_exe, input=in_file) #have to change file path
#mafft_cline = MafftCommandline(mafft_exe, input=in_file, localpair=True, lexp=-1.5, lop=0.5)
stdout, stderr = mafft_cline()
print(stdout)
test_align = AlignIO.read(io.StringIO(stdout), "fasta")
#print(test_align)
os.remove("newTempFile112233.fasta")
print('Total time = ' + str(time() - startTime))
startTime = time()
matrix = matlist.blosum62
pWise_align = pairwise2.align.localds(sample_tList[0][1], sample_tList[1][1], matrix, -6, -1)
print(format_alignment(*pWise_align[0]))
print('Total time = ' + str(time() - startTime))
I've attempted to change the MAFFT command line alignment algorithm by referencing the help document (http://mafft.cbrc.jp/alignment/software/manual/manual.html). I don't get any error messages, but the alignment output does not change, and I'm unsure what adjustments need to be made. I believe that increasing the gap extension penalty (which is zero by default) would improve the alignment. I haven't been able to find many examples of custom parameters being passed to the MAFFT command line, either on this forum or through a Google search. Help is much appreciated. For reference, documentation on the Pairwise2 alignment parameters can be found here: http://biopython.org/DIST/docs/api/Bio.pairwise2-module.html
Managed to figure out a possible solution. The alignment of the example sequences provided results in a long terminal/end gap which should not be present. Changing the MAFFT alignment algorithm using localpair, lexp, and lop had no effect (causing me a good deal of confusion). However, I noticed differences in the alignment output when each input sequence was reversed. Oddly, the only way I was able to remove the terminal/end gap was to set lop (the gap opening penalty) to a smaller amount relative to lexp (the gap extension penalty). I suspect my solution is niche and may not be applicable to other similar occurrences of terminal gaps. Changing the alignment settings in this way also likely makes the overall alignment less optimal.
Going forward, I plan to use an automated process to run alignments of consensus sequences to raw sequences. In the event I detect irregularities with the alignment output (specifically terminal gaps), I'll attempt to reverse the input sequences and apply custom alignment settings. I suppose if that isn't a consistent solution, I'll figure out a way to refine the alignment output directly.
For anyone curious, I used a lexp value of -1.5 and a lop value of 0.5 (now included in a commented-out line in my example code).
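For reference, here is a minimal sketch of what that fallback might look like, reusing the temp-FASTA workflow from my example above; the helper name and temporary filename are made up, and the lop/lexp values are simply the ones quoted above:

def align_reversed(sample_tList, mafft_exe='/usr/local/bin/mafft'):
    # write each record with its sequence reversed
    tmp_name = "revTempFile112233.fasta"
    with open(tmp_name, "w") as ex_file:
        for header, seq in sample_tList:
            ex_file.write(header + "\n")
            ex_file.write(seq[::-1] + "\n")
    # rerun MAFFT with the custom gap penalties
    mafft_cline = MafftCommandline(mafft_exe, input=tmp_name,
                                   localpair=True, lop=0.5, lexp=-1.5)
    stdout, stderr = mafft_cline()
    os.remove(tmp_name)
    # un-reverse the aligned rows so they read in the original orientation
    alignment = AlignIO.read(io.StringIO(stdout), "fasta")
    for record in alignment:
        record.seq = record.seq[::-1]
    return alignment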
I have a Raspberry Pi on which I want to create a timelapse movie.
All examples I see on the internet FIRST save a bunch of images and THEN convert them into a movie all at once.
I want to create a movie over a long period of time so I can't save thousands of images. What I need is a tool that adds an image to a movie right after the image is captured.
Is there a chance to do that?
There's a flaw in your logic, I think: by adding each image to the movie, you would necessarily be adding a full frame rather than only a diff frame. This will result in higher quality, sure, but it will also not save you anything in terms of space compared to saving the entire image. The space savings you see when adding things to movies comes entirely from storing that diff rather than a full frame.
Doing a partial diff with keyframes at increments might work, but I'm not sure what format you're targeting, nor what codecs would be needed in order to arbitrarily tack on either a diff frame or a full frame depending on some external condition; encoding usually takes place as a series of operations rather than one frame at a time.
An answer but it isn't finished!
I need your help making this perfect!
Running in Python 2:
import os, cv2
from picamera import PiCamera
from picamera.array import PiRGBArray
from datetime import datetime
from time import sleep

now = datetime.now()
x = now.strftime("%Y")+"-"+now.strftime("%m")+"-"+now.strftime("%d")+"-"+now.strftime("%H")+"-"+now.strftime("%M") #string of dateandtimestart

def main():
    imagenum = 100 #how many images
    period = 1 #seconds between images
    os.chdir("/home/pi/t_lapse")
    os.mkdir(x)
    os.chdir(x)
    filename = x + ".avi"

    camera = PiCamera()
    camera.resolution = (1920, 1088)
    camera.vflip = True
    camera.hflip = True
    camera.color_effects = (128, 128) #makes a black and white image for IR camera
    sleep(0.1)

    out = cv2.VideoWriter(filename, cv2.cv.CV_FOURCC(*'XVID'), 30, (1920, 1088))

    for c in range(imagenum):
        with PiRGBArray(camera, size=(1920, 1088)) as output:
            camera.capture(output, 'bgr')
            imagec = output.array
            out.write(imagec)
            output.truncate(0) #trying to get more than 300mb files..
        sleep(period - 0.5)

    camera.close()
    out.release()

if __name__ == '__main__':
    main()
I've got this configured with a few buttons and an OLED display to select the time spacing and number of frames (code not shown above for simplicity, but it is also here: https://github.com/gchennell/RPi-PiLapse ).
This doesn't make videos larger than 366 MB, which is some sort of limit I've hit and I don't know why; if anyone has a good suggestion I would appreciate it.
I'm looking at implementing a Caffe CNN which accepts two input images and a label (later perhaps other data) and was wondering if anyone was aware of the correct syntax in the prototxt file for doing this? Is it simply an IMAGE_DATA layer with additional tops? Or should I use separate IMAGE_DATA layers for each?
Thanks,
James
Edit: I have been using the HDF5_DATA layer lately for this and it is definitely the way to go.
HDF5 is a key-value store, where each key is a string and each value is a multi-dimensional array. Thus, to use the HDF5_DATA layer, just add a new key for each top you want to use, and set the value for that key to store the image you want to use. Writing these HDF5 files from Python is easy:
import h5py
import numpy as np

filelist = []
for i in range(100):
    image1 = get_some_image(i)    # placeholders for however you load your images
    image2 = get_another_image(i)
    filename = '/tmp/my_hdf5%d.h5' % i
    with h5py.File(filename, 'w') as f:
        # Caffe expects channels-first (C, H, W) arrays
        f['data1'] = np.transpose(image1, (2, 0, 1))
        f['data2'] = np.transpose(image2, (2, 0, 1))
    filelist.append(filename)

with open('/tmp/filelist.txt', 'w') as f:
    for filename in filelist:
        f.write(filename + '\n')
Then simply set the source of the HDF5_DATA param to be '/tmp/filelist.txt', and set the tops to be "data1" and "data2".
I'm leaving the original response below:
====================================================
There are two good ways of doing this. The easiest is probably to use two separate IMAGE_DATA layers, one with the first image and the label, and a second with the second image. Caffe retrieves images from LMDB or LEVELDB, which are key-value stores; assuming you create your two databases with corresponding images having the same integer id key, Caffe will in fact load the images correctly, and you can proceed to construct your net with the data/labels of both layers.
The problem with this approach is that having two data layers is not really very satisfying, and it doesn't scale very well if you want to do more advanced things like having non-integer labels for things like bounding boxes, etc. If you're prepared to make a time investment in this, you can do a better job by modifying the tools/convert_imageset.cpp file to stack images or other data across channels. For example, you could create a datum with 6 channels: the first 3 for your first image's RGB, and the second 3 for your second image's RGB. After reading this in using the IMAGE_DATA layer, you can split the stream into two images using a SLICE layer with a slice_point at index 3 along the slice_dim = 1 dimension, as sketched below. If, further down the road, you decide that you want to load even more complex assortments of data, you'll understand the encoding scheme and can write your own decoding layer based off of src/caffe/layers/data_layer.cpp to gain full control of the pipeline.
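As a rough illustration of that slicing step, here is a sketch using pycaffe's NetSpec interface (assuming a Caffe build recent enough to have it; the LMDB path, batch size, and blob names are made up for the example):

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
# 6-channel datum (two stacked RGB images) read back from the LMDB
n.pair_data, n.label = L.Data(source='/path/to/stacked_lmdb', backend=P.Data.LMDB,
                              batch_size=32, ntop=2)
# split channels 0-2 and 3-5 back into the two original images
n.img1, n.img2 = L.Slice(n.pair_data, ntop=2,
                         slice_param=dict(axis=1, slice_point=[3]))
print(n.to_proto())  # emits the equivalent prototxt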
You may also consider using the HDF5_DATA layer with multiple "top"s, as in the sketch below.
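In that case the earlier /tmp/filelist.txt can be wired in with something like this (again a NetSpec sketch, reusing the imports from above; the batch size is arbitrary):

n = caffe.NetSpec()
# tops must match the dataset names written into the HDF5 files ("data1", "data2")
n.data1, n.data2 = L.HDF5Data(source='/tmp/filelist.txt', batch_size=32, ntop=2)
print(n.to_proto())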