Reading image pixel values into a DataFrame and saving to CSV: after reloading, the images can't be displayed

First I open the images and get their pixel values, then write them to a DataFrame:
skin_df_balanced['pix'] = skin_df_balanced['path'].map(lambda x: np.asarray(Image.open(x).resize((SIZE,SIZE))))
Then I saved this DataFrame to a CSV file:
skin_df_balanced.to_csv('./skin_df_balanced_pixel_value_500.csv')
But when I reloaded the CSV and tried to plot the images:
test = pd.read_csv('./skin_df_balanced_pixel_value_500.csv', index_col=0)
n_samples = 5
fig, m_axs = plt.subplots(7, n_samples, figsize=(4*n_samples, 3*7))
for n_axs, (type_name, type_rows) in zip(m_axs, test.sort_values(['type']).groupby('type')):
    n_axs[0].set_title(type_name)
    for c_ax, (_, c_row) in zip(n_axs, type_rows.sample(n_samples, random_state=1234).iterrows()):
        c_ax.imshow(c_row['pix'])
        c_ax.axis('off')
I got this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-26-abe099827f62> in <module>()
5 n_axs[0].set_title(type_name)
6 for c_ax, (_, c_row) in zip(n_axs, type_rows.sample(n_samples, random_state=1234).iterrows()):
----> 7 c_ax.imshow(c_row['pix'])
8 c_ax.axis('off')
4 frames
/usr/local/lib/python3.7/dist-packages/matplotlib/image.py in set_data(self, A)
692 not np.can_cast(self._A.dtype, float, "same_kind")):
693 raise TypeError("Image data of dtype {} cannot be converted to "
--> 694 "float".format(self._A.dtype))
695
696 if not (self._A.ndim == 2
TypeError: Image data of dtype <U629 cannot be converted to float
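For context, the dtype <U629 in the error means each reloaded 'pix' cell is a 629-character string rather than an array: to_csv writes every numpy array as its text representation, and read_csv hands that text back as a plain Python string. A minimal sketch (using made-up data, not the skin dataset) that reproduces the round-trip loss:
import numpy as np
import pandas as pd

# A tiny stand-in for the real DataFrame: one cell holding an image-like array.
df = pd.DataFrame({'pix': [np.zeros((2, 2, 3), dtype=np.uint8)]})
df.to_csv('tmp.csv')                          # the array is written as its repr text
reloaded = pd.read_csv('tmp.csv', index_col=0)
print(type(reloaded['pix'].iloc[0]))          # <class 'str'> -- no longer an ndarray
A format that preserves arrays (for example pickle, or separate .npy files) avoids this, but that is beyond what the question shows.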


Could ruamel.yaml support type descriptor like "num: !!float 4"?

I am learning ruamel.yaml, and I am wondering whether it supports type tags like standard YAML does, e.g. "num: !!float 4".
The file is like:
num: !!float 4
I tried import a file like this, but met an error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [22], in <cell line: 2>()
1 from ruamel import yaml
2 with open("net.yaml", "r", encoding="utf-8") as yaml_file:
----> 3 yaml_dict = yaml.round_trip_load(yaml_file)
4 yaml_dict
...
File ~/software/python/anaconda/anaconda3/envs/conda-general/lib/python3.10/site-packages/ruamel/yaml/constructor.py:1469, in RoundTripConstructor.construct_mapping(self, node, maptyp, deep)
1462 if not isinstance(key, Hashable):
1463 raise ConstructorError(
1464 'while constructing a mapping',
1465 node.start_mark,
1466 'found unhashable key',
1467 key_node.start_mark,
1468 )
-> 1469 value = self.construct_object(value_node, deep=deep)
1470 if self.check_mapping_key(node, key_node, maptyp, key, value):
1471 if key_node.comment and len(key_node.comment) > 4 and key_node.comment[4]:
File ~/software/python/anaconda/anaconda3/envs/conda-general/lib/python3.10/site-packages/ruamel/yaml/constructor.py:146, in BaseConstructor.construct_object(self, node, deep)
142 # raise ConstructorError(
143 # None, None, 'found unconstructable recursive node', node.start_mark
144 # )
145 self.recursive_objects[node] = None
--> 146 data = self.construct_non_recursive_object(node)
148 self.constructed_objects[node] = data
149 del self.recursive_objects[node]
File ~/software/python/anaconda/anaconda3/envs/conda-general/lib/python3.10/site-packages/ruamel/yaml/constructor.py:181, in BaseConstructor.construct_non_recursive_object(self, node, tag)
179 constructor = self.__class__.construct_mapping
180 if tag_suffix is None:
--> 181 data = constructor(self, node)
182 else:
183 data = constructor(self, tag_suffix, node)
File ~/software/python/anaconda/anaconda3/envs/conda-general/lib/python3.10/site-packages/ruamel/yaml/constructor.py:1271, in RoundTripConstructor.construct_yaml_float(self, node)
1259 return ScalarFloat(
1260 sign * float(value_s),
1261 width=width,
(...)
1268 anchor=node.anchor,
1269 )
1270 width = len(value_so)
-> 1271 prec = value_so.index('.') # you can use index, this would not be float without dot
1272 lead0 = leading_zeros(value_so)
1273 return ScalarFloat(
1274 sign * float(value_s),
1275 width=width,
(...)
1279 anchor=node.anchor,
1280 )
ValueError: substring not found
Why do I get this error, and how do I get rid of it?
That is a bug in ruamel.yaml<=0.17.21. The comment on the offending line (1271) says
# you can use index, this would not be float without dot
Obviously the author of that comment didn't know what he was talking about, as in your case, when using !!float 4 you have a float without a dot...
It is trivial to "fix" that by replacing index with find on line 1271; with that change your document loads and you can dump the data.
But the corresponding representer for dumping doesn't cope with that and outputs the float as 4.0, dropping the tag.
You could temporarily fix this by registering a simpler float constructor (e.g. the simple one from the SafeLoader), although this will affect all floats:
import sys
import ruamel.yaml

yaml_str = """\
num: !!float 4
"""
yaml = ruamel.yaml.YAML()
yaml.constructor.add_constructor(
    'tag:yaml.org,2002:float', ruamel.yaml.constructor.SafeConstructor.construct_yaml_float
)
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
which gives:
num: 4.0

Problem using spacy tokenizer for count vectorizer

I'm trying to do sentiment analysis on Amazon product reviews using the Spacy module for preprocessing the text data. The code I'm using is exactly this. I modified the dataset that I'm using according to what's shown in the link. I'm getting the error:
TypeError Traceback (most recent call last)
<ipython-input-139-bcbf2d3c9cce> in <module>
4 ('classifier', classifier)])
5 # Fit our data
----> 6 pipe_countvect.fit(X_train,y_train)
7 # Predicting with a test dataset
8 sample_prediction = pipe_countvect.predict(X_test)
~\.conda\envs\py36\lib\site-packages\sklearn\pipeline.py in fit(self, X, y, **fit_params)
328 """
329 fit_params_steps = self._check_fit_params(**fit_params)
--> 330 Xt = self._fit(X, y, **fit_params_steps)
331 with _print_elapsed_time('Pipeline',
332 self._log_message(len(self.steps) - 1)):
~\.conda\envs\py36\lib\site-packages\sklearn\pipeline.py in _fit(self, X, y, **fit_params_steps)
294 message_clsname='Pipeline',
295 message=self._log_message(step_idx),
--> 296 **fit_params_steps[name])
297 # Replace the transformer of the step with the fitted
298 # transformer. This is necessary when loading the transformer
~\.conda\envs\py36\lib\site-packages\joblib\memory.py in __call__(self, *args, **kwargs)
350
351 def __call__(self, *args, **kwargs):
--> 352 return self.func(*args, **kwargs)
353
354 def call_and_shelve(self, *args, **kwargs):
~\.conda\envs\py36\lib\site-packages\sklearn\pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params)
738 with _print_elapsed_time(message_clsname, message):
739 if hasattr(transformer, 'fit_transform'):
--> 740 res = transformer.fit_transform(X, y, **fit_params)
741 else:
742 res = transformer.fit(X, y, **fit_params).transform(X)
~\.conda\envs\py36\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
1197
1198 vocabulary, X = self._count_vocab(raw_documents,
-> 1199 self.fixed_vocabulary_)
1200
1201 if self.binary:
~\.conda\envs\py36\lib\site-packages\sklearn\feature_extraction\text.py in _count_vocab(self, raw_documents, fixed_vocab)
1108 for doc in raw_documents:
1109 feature_counter = {}
-> 1110 for feature in analyze(doc):
1111 try:
1112 feature_idx = vocabulary[feature]
~\.conda\envs\py36\lib\site-packages\sklearn\feature_extraction\text.py in _analyze(doc, analyzer, tokenizer, ngrams, preprocessor, decoder, stop_words)
104 doc = preprocessor(doc)
105 if tokenizer is not None:
--> 106 doc = tokenizer(doc)
107 if ngrams is not None:
108 if stop_words is not None:
TypeError: 'str' object is not callable
I'm not sure what's causing this error or how to get rid of it. I'm pretty sure the count vectorizer produces a sparse matrix, not a string. One thing I've considered is the spacy tokenizer: the link uses vectorizer = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,1)), but when I ran the program it said spacy_tokenizer was undefined, so I used vectorizer = CountVectorizer(tokenizer = 'spacy', ngram_range=(1,1)) instead. If I remove this I don't know how else to use the spacy tokenizer, and either way I'm not certain this was the cause of the problem. Please help me out!
The error comes at this line:
doc = tokenizer(doc)
Since it says 'str' is not callable and the only thing being called here is the tokenizer object, it looks like your tokenizer is a string for some reason.
Based on the code you linked it looks like the spacy_tokenizer object is being configured incorrectly. But that variable isn't defined anywhere in the code despite being passed as an option, so the code you linked to looks like it can't possibly run.
It would help if you could make a minimal example that you could actually paste in the question here.
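As an illustration, here is a hedged sketch of how a callable spacy_tokenizer is usually wired into CountVectorizer; the function body and the en_core_web_sm model are assumptions, not the linked tutorial's exact code:
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def spacy_tokenizer(doc):
    # Lemmatize with spaCy, dropping stop words and punctuation.
    return [tok.lemma_.lower() for tok in nlp(doc) if not (tok.is_stop or tok.is_punct)]

# Pass the function itself, not the string 'spacy'.
vectorizer = CountVectorizer(tokenizer=spacy_tokenizer, ngram_range=(1, 1))
X = vectorizer.fit_transform(["This product works great", "Terrible battery life"])
print(sorted(vectorizer.vocabulary_))
The key point is that tokenizer expects a callable mapping a document string to a list of tokens, which is why passing the string 'spacy' ends in 'str' object is not callable.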

JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0) ---While Tuning gpt2.finetune

Hope you are all doing well.
I am working on fine-tuning a GPT-2 model to generate titles based on content. While working on it, I created a simple CSV file containing only the titles to train the model, but when I feed this file to GPT-2 for fine-tuning I get the following error:
JSONDecodeError Traceback (most recent call last)
in ()
10 steps=1000,
11 save_every=200,
---> 12 sample_every=25) # steps is max number of training steps
13
14 # gpt2.generate(sess)
3 frames
/usr/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
336 if s.startswith('\ufeff'):
337 s = s.encode('utf8')[3:].decode('utf8')
--> 338 # raise JSONDecodeError("Unexpected UTF-8 BOM (decode using utf-8-sig)",
339 # s, 0)
340 else:
JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0)
Below is my code for the above:
import gpt_2_simple as gpt2

model_name = "120M"  # "355M" for larger model (it's 1.4 GB)
gpt2.download_gpt2(model_name=model_name)   # model is saved into current directory under /models/117M/

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              'titles.csv',
              model_name=model_name,
              steps=1000,
              save_every=200,
              sample_every=25)   # steps is max number of training steps
I have tried all the basic mechanisms of handling the UTF-8 BOM but did not have any luck, hence requesting your help. It would be a great help from you all.
Try changing the model name: I see you input 120M, and the GPT-2 model is called 124M.
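For reference, a hedged sketch of the same call with the corrected model name ('titles.csv' is the file from the question; the gpt_2_simple model sizes are "124M", "355M", "774M" and "1558M"):
import gpt_2_simple as gpt2

model_name = "124M"                         # not "120M"; "124M" is the small GPT-2 model
gpt2.download_gpt2(model_name=model_name)   # saved under ./models/124M/

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              'titles.csv',                 # training file from the question
              model_name=model_name,
              steps=1000,
              save_every=200,
              sample_every=25)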

Reading hdf file - https and Xarray

I am trying to read HDF files, over an https connection, from the Harmonized Landsat Sentinel repository (here: https://hls.gsfc.nasa.gov/data/v1.4/).
Ideally, I would use xarray to do this. Here is an example:
Example of https:
xr.open_rasterio('https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf')
<xarray.DataArray (band: 1, y: 3660, x: 3660)>
[13395600 values with dtype=int16]
Coordinates:
* band (band) int64 1
* y (y) float64 4.6e+06 4.6e+06 4.6e+06 ... 4.49e+06 4.49e+06 4.49e+06
* x (x) float64 5e+05 5e+05 5.001e+05 ... 6.097e+05 6.097e+05 6.098e+05
Attributes:
transform: (30.0, -0.0, 499980.0, -0.0, -30.0, 4600020.0)
crs: +init=epsg:32613
res: (30.0, 30.0)
is_tiled: 0
nodatavals: (nan,)
scales: (1.0,)
offsets: (0.0,)
bands: 1
byte_order: 0
coordinate_system_string: PROJCS["UTM_Zone_13N",GEOGCS["GCS_WGS_1984",DA...
data_type: 2
description: HDF Imported into ENVI.
file_type: HDF Scientific Data
header_offset: 0
interleave: bsq
lines: 3660
samples: 3660
Note these files have multiple datasets/bands, so the above is incorrect.
xr.open_dataset('https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf')
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/xarray/backends/file_manager.py in _acquire_with_cache_info(self, needs_lock)
194 try:
--> 195 file = self._cache[self._key]
196 except KeyError:
/opt/conda/lib/python3.7/site-packages/xarray/backends/lru_cache.py in __getitem__(self, key)
42 with self._lock:
---> 43 value = self._cache[key]
44 self._cache.move_to_end(key)
KeyError: [<class 'netCDF4._netCDF4.Dataset'>, ('https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf',), 'r', (('clobber', True), ('diskless', False), ('format', 'NETCDF4'), ('persist', False))]
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-85-7765ae565af3> in <module>
----> 1 xr.open_dataset('https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf')
/opt/conda/lib/python3.7/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs, use_cftime)
497 if engine == "netcdf4":
498 store = backends.NetCDF4DataStore.open(
--> 499 filename_or_obj, group=group, lock=lock, **backend_kwargs
500 )
501 elif engine == "scipy":
/opt/conda/lib/python3.7/site-packages/xarray/backends/netCDF4_.py in open(cls, filename, mode, format, group, clobber, diskless, persist, lock, lock_maker, autoclose)
387 netCDF4.Dataset, filename, mode=mode, kwargs=kwargs
388 )
--> 389 return cls(manager, group=group, mode=mode, lock=lock, autoclose=autoclose)
390
391 def _acquire(self, needs_lock=True):
/opt/conda/lib/python3.7/site-packages/xarray/backends/netCDF4_.py in __init__(self, manager, group, mode, lock, autoclose)
333 self._group = group
334 self._mode = mode
--> 335 self.format = self.ds.data_model
336 self._filename = self.ds.filepath()
337 self.is_remote = is_remote_uri(self._filename)
/opt/conda/lib/python3.7/site-packages/xarray/backends/netCDF4_.py in ds(self)
396 #property
397 def ds(self):
--> 398 return self._acquire()
399
400 def open_store_variable(self, name, var):
/opt/conda/lib/python3.7/site-packages/xarray/backends/netCDF4_.py in _acquire(self, needs_lock)
390
391 def _acquire(self, needs_lock=True):
--> 392 with self._manager.acquire_context(needs_lock) as root:
393 ds = _nc4_require_group(root, self._group, self._mode)
394 return ds
/opt/conda/lib/python3.7/contextlib.py in __enter__(self)
110 del self.args, self.kwds, self.func
111 try:
--> 112 return next(self.gen)
113 except StopIteration:
114 raise RuntimeError("generator didn't yield") from None
/opt/conda/lib/python3.7/site-packages/xarray/backends/file_manager.py in acquire_context(self, needs_lock)
181 def acquire_context(self, needs_lock=True):
182 """Context manager for acquiring a file."""
--> 183 file, cached = self._acquire_with_cache_info(needs_lock)
184 try:
185 yield file
/opt/conda/lib/python3.7/site-packages/xarray/backends/file_manager.py in _acquire_with_cache_info(self, needs_lock)
199 kwargs = kwargs.copy()
200 kwargs["mode"] = self._mode
--> 201 file = self._opener(*self._args, **kwargs)
202 if self._mode == "w":
203 # ensure file doesn't get overriden when opened again
netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__()
netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()
OSError: [Errno -90] NetCDF: file not found: b'https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf'
When read from disk:
xr.open_rasterio('HLS.S30.T13TEF.2017002.v1.4.hdf')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-88-f4ae5075928a> in <module>
----> 1 xr.open_rasterio('HLS.S30.T13TEF.2017002.v1.4.hdf')
/opt/conda/lib/python3.7/site-packages/xarray/backends/rasterio_.py in open_rasterio(filename, parse_coordinates, chunks, cache, lock)
250 # Get bands
251 if riods.count < 1:
--> 252 raise ValueError("Unknown dims")
253 coords["band"] = np.asarray(riods.indexes)
254
ValueError: Unknown dims
and
xr.open_dataset('/home/rowangaffney/Desktop/HLS.S30.T13TEF.2017002.v1.4.hdf')
<xarray.Dataset>
Dimensions: (XDim_Grid: 3660, YDim_Grid: 3660)
Dimensions without coordinates: XDim_Grid, YDim_Grid
Data variables:
B01 (YDim_Grid, XDim_Grid) float32 ...
B02 (YDim_Grid, XDim_Grid) float32 ...
B03 (YDim_Grid, XDim_Grid) float32 ...
B04 (YDim_Grid, XDim_Grid) float32 ...
B05 (YDim_Grid, XDim_Grid) float32 ...
B06 (YDim_Grid, XDim_Grid) float32 ...
B07 (YDim_Grid, XDim_Grid) float32 ...
B08 (YDim_Grid, XDim_Grid) float32 ...
B8A (YDim_Grid, XDim_Grid) float32 ...
B09 (YDim_Grid, XDim_Grid) float32 ...
B10 (YDim_Grid, XDim_Grid) float32 ...
B11 (YDim_Grid, XDim_Grid) float32 ...
B12 (YDim_Grid, XDim_Grid) float32 ...
QA (YDim_Grid, XDim_Grid) float32 ...
Attributes:
PRODUCT_URI: S2A_MSIL1C_20170102T17...
L1C_IMAGE_QUALITY: SENSOR:PASSED GEOMETRI...
SPACECRAFT_NAME: Sentinel-2A
TILE_ID: S2A_OPER_MSI_L1C_TL_SG...
DATASTRIP_ID: S2A_OPER_MSI_L1C_DS_SG...
PROCESSING_BASELINE: 02.04
SENSING_TIME: 2017-01-02T17:58:23.575Z
L1_PROCESSING_TIME: 2017-01-02T21:41:37.84...
HORIZONTAL_CS_NAME: WGS84 / UTM zone 13N
HORIZONTAL_CS_CODE: EPSG:32613
NROWS: 3660
NCOLS: 3660
SPATIAL_RESOLUTION: 30
ULX: 499980.0
ULY: 4600020.0
MEAN_SUN_ZENITH_ANGLE(B01): 65.3577462333765
MEAN_SUN_AZIMUTH_ANGLE(B01): 165.01162242158
MEAN_VIEW_ZENITH_ANGLE(B01): 8.10178275092502
MEAN_VIEW_AZIMUTH_ANGLE(B01): 285.224586475702
spatial_coverage: 89
cloud_coverage: 72
ACCODE: LaSRCS2AV3.5.5
arop_s2_refimg: NONE
arop_ncp: 0
arop_rmse(meters): 0.0
arop_ave_xshift(meters): 0.0
arop_ave_yshift(meters): 0.0
HLS_PROCESSING_TIME: 2018-02-24T18:17:49Z
NBAR_Solar_Zenith: 44.82820466504637
AngleBand: [ 0 1 2 3 4 5 6 ...
MSI band 01 bandpass adjustment slope and offset: 0.995900, -0.000200
MSI band 02 bandpass adjustment slope and offset: 0.977800, -0.004000
MSI band 03 bandpass adjustment slope and offset: 1.005300, -0.000900
MSI band 04 bandpass adjustment slope and offset: 0.976500, 0.000900
MSI band 8a bandpass adjustment slope and offset: 0.998300, -0.000100
MSI band 11 bandpass adjustment slope and offset: 0.998700, -0.001100
MSI band 12 bandpass adjustment slope and offset: 1.003000, -0.001200
StructMetadata.0: GROUP=SwathStructure\n.
Any idea on best practices for reading these data over https?
Thanks!
I recommend reading http://matthewrocklin.com/blog/work/2018/02/06/hdf-in-the-cloud to understand why accessing HDF5 files directly over https is not as easy as it seems. So not exactly a solution, but you'll probably need to download the data and load it from there (in the short term at least).
Oh, and you might want to try using the 'h5netcdf' engine to read the file instead:
xr.open_dataset("HLS.S30.T13TEF.2017002.v1.4.hdf", engine="h5netcdf")
and if you're interested in just one band, do something like this:
xr.open_dataset("HLS.S30.T13TEF.2017002.v1.4.hdf", engine="h5netcdf", group="B01")
Just a note for others though: the code below would work in some cases if you use xarray with the 'h5netcdf' engine, have installed the 'h5pyd' library, and the URL is served by an HDF REST API interface:
xr.open_dataset(
    "https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/HLS.S30.T13TEF.2017002.v1.4.hdf",
    engine="h5netcdf",
)
But unfortunately, that's not quite the case with these NASA datasets...
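Putting the two suggestions together, a hedged sketch (assuming the download succeeds and the file exposes per-band groups such as "B01", as described above) would be to fetch the file over HTTPS first and then open a band locally:
import urllib.request
import xarray as xr

url = ("https://hls.gsfc.nasa.gov/data/v1.4/S30/2017/13/T/E/F/"
       "HLS.S30.T13TEF.2017002.v1.4.hdf")
local_path = "HLS.S30.T13TEF.2017002.v1.4.hdf"

# Download the file over HTTPS first, then read it from local disk.
urllib.request.urlretrieve(url, local_path)
b01 = xr.open_dataset(local_path, engine="h5netcdf", group="B01")
print(b01)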

Encapsulated PostScript max line limit

I think that I'm exceeding EPS max line limits:
I'm generating an eps programmatically that consists of a grid of pictures.
My EPS has this structure:
%!PS-Adobe-3.0 EPSF-3.0
.
.
%%BeginProlog
%%EndProlog
%%Page: 1 1
%%Begin Raster Image. Index: 0
save
449 2576 translate
0 rotate
-282 -304 translate
[1 0 0 1 0 0] concat
0 0 translate
[1 0 0 1 0 0] concat
0 0 translate
userdict begin
DisplayImage
0 0
564 608
12
564 608
0
0
FBDBB9FBDCBCFDDBBAFFD8B2FFD7A9FED4A1FCD29CFDD09EFED0A2FFD0A6FFCDA3FFCBA0FFCBA0...
EED79CEBD09CEDD19EEED2A1EFD3A3F0D4A5F0D4A6F0D4A7F1D4A4F3D4A0F3D49F
end
restore
%%End Raster Image
%%Begin Raster Image. Index: 1
.
.
end
restore
%%End Raster Image
%%Begin Raster Image. Index: 2
etc
So the thing is if I write up to 4 images to the EPS, everything works fine, but when I try to write a 5th, the eps won't open on any EPS viewer including Adobe Illustrator (the operation cannot complete because of an unknown error).
I tried using different images to make sure that the particular images were ok and I got the same result, as long as I'm writing 4 images (105825 lines file) everything works. But when I use 5 (132253 lines file) it fails.
Is it possible that I'm exceeding the maximum line limit for EPS?
These are the files in question if you'd like to analyze them:
the one that works -> https://files.fm/u/bfn2d32m and the one that doesn't ->
https://files.fm/u/4gbybr3y
There's no 'line limit' in PostScript or EPS, so you can't be hitting that.
When I run your file through Ghostscript it throws an error /undefined in yImage (I'd suggest you debug PostScript using a proper PostScript interpreter, not Adobe Illustrator).
This suggests to me that one of your images is using more data than you have supplied, so the interpreter runs off the end of the data, consuming parts of the program, until it has read sufficient bytes from currentfile to satisfy the request. At that point it starts processing the file as PostScript again, but the file pointer now points to the 'yImage' of the next 'DisplayImage'. Since you haven't defined a 'yImage' key, naturally this gives you an 'undefined' error.
From your description, this would seem likely to be the 4th image, since adding the 5th throws the error. Note that if your program terminates without supplying enough data (so the interpreter reaches EOF) then the data supplied will be drawn. So it might look like your 4th image is correct even when it isn't, provided it isn't followed by any further program code.
A style note: PostScript is a stack-based language, so normally one would proceed by pushing values onto the stack and reading them from there, instead of executing the 'token' operator.
So your input would be more like :
0 0
564 608
12
564 608
0
0
DisplayImage
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
...
And the DisplayImage code would be :
/DisplayImage
{
  %
  % Display a DirectClass or PseudoClass image.
  %
  % Parameters:
  %   x & y translation.
  %   x & y scale.
  %   label pointsize.
  %   image label.
  %   image columns & rows.
  %   class: 0-DirectClass or 1-PseudoClass.
  %   compression: 0-none or 1-RunlengthEncoded.
  %   hex color packets.
  %
  gsave
  /buffer 512 string def
  /byte 1 string def
  /color_packet 3 string def
  /pixels 768 string def
  /compression exch def
  /class exch def
  /rows exch def
  /columns exch def
  /pointsize exch def
  scale
  translate
This avoids having to use token at all for the scale and translate operations, for instance.
