How to save an Array{UInt16,2} to an image in Julia

I have an Array{UInt16,2} in Julia of size 5328×3040. I want to save it to a png image.
I tried the following:
save("gray.png", colorview(Gray, img))
But got the following error:
ERROR: TypeError: in Gray, in T, expected T<:Union{Bool, AbstractFloat, FixedPoint}, got Type{UInt16}
Stacktrace:
[1] ccolor_number at C:\Users\ankushar\.julia\packages\ImageCore\KbJyT\src\convert_reinterpret.jl:60 [inlined]
[2] ccolor_number at C:\Users\ankushar\.julia\packages\ImageCore\KbJyT\src\convert_reinterpret.jl:57 [inlined]
[3] colorview(::Type{Gray}, ::Array{UInt16,2}) at C:\Users\ankushar\.julia\packages\ImageCore\KbJyT\src\colorchannels.jl:104
[4] top-level scope at REPL[16]:1
caused by [exception 3]
IOError: symlink: operation not permitted (EPERM)
I am using Julia 1.4.2
Can you suggest a good way to store these arrays as images in Julia?
TIA!

You can normalize the pixel values before saving.
using Images

img = rand(UInt16, 10, 20)
img[1:3]
# => 3-element Array{UInt16,1}:
#     0x7fc2
#     0x057e
#     0xae79

gimg = colorview(Gray, img ./ typemax(UInt16))
gimg[1:3] |> channelview
# => 3-element reinterpret(Float64, ::Array{Gray{Float64},1}):
#     0.4990615701533532
#     0.02145418478675517
#     0.6815442130159457

save("gray.png", gimg)

A faster and more accurate solution is to reinterpret your array as an array of N0f16, a type from FixedPointNumbers.jl that is essentially a UInt16 rescaled to the range 0 to 1. This both avoids rounding errors and avoids making a copy of the data.
using Images, FixedPointNumbers

img = rand(UInt16, 10, 20)
gimg = colorview(Gray, reinterpret(N0f16, img))
save("gray.png", gimg)

Related

Change all images in training set

I have a convolutional neural network, and I want to train it on images from the training set, but first each image should be wrapped with my function change(tensor, float), which takes a tensor/image of shape [height, width, 3] and a float.
batch_size = 4

# loading data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

# CNN architecture ...
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # size of inputs: [4, 3, 32, 32]
        # size of labels: [4]
        inputs = change(inputs, 0.1)  # <-- the step I want to add

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)  # [4, 10]
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')
I am trying to apply the change function to the images, but it gives an object error. Is there a quick way to fix it?
I am using a Julia function, and it works completely fine with other objects. Error message:
JULIA: MethodError: no method matching copy(::PyObject)
Closest candidates are:
copy(!Matched::T) where T<:SHA.SHA3_CTX at /opt/julia-1.7.2/share/julia/stdlib/v1.7/SHA/src/types.jl:213
copy(!Matched::T) where T<:SHA.SHA2_CTX at /opt/julia-1.7.2/share/julia/stdlib/v1.7/SHA/src/types.jl:212
copy(!Matched::Number) at /opt/julia-1.7.2/share/julia/base/number.jl:113
I would recommend putting the change function into the transforms list, so the data change happens at the transformation stage.
partial from functools will help you fix the float argument so that change_partial only takes the image, like this (a usage sketch follows the snippet):
from functools import partial
from torchvision.transforms import Compose, RandomResizedCrop, ToTensor

def change(input, float):
    pass  # your existing function body goes here

# Use partial to fix the float, such that change_partial accepts only the input image
change_partial = partial(change, float=pass_float_value_here)

# Add change_partial to the list of transforms before or after converting to tensors
transforms = Compose([
    RandomResizedCrop(img_size),  # example
    change_partial,               # add change_partial here if it operates on PIL Images
    ToTensor(),                   # convert to tensor
    change_partial,               # or here if it operates on torch tensors (keep only one)
])
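Assuming change operates on torch tensors (an assumption; the names below mirror the question), a minimal usage sketch is to pass the composed transform to the dataset, so the wrapping runs on every image as it is loaded and the manual inputs = change(inputs, 0.1) line can be dropped from the training loop:
import torch
import torchvision

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transforms)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)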

JPEG-in-TIF Conversion: Preserve identical array representation

I have a JPEG image. I would like to convert the image to a TIF with JPEG compression (i.e. a "JPEG-in-TIF"). I am using GDAL to do this.
I require that the array representations of the original JPEG and converted TIF are identical.
I have found a method to do this but I am trying to understand a.) why it appears to work and b.) whether it will fail or have significant additional downsides. (If there is an alternative, sure-fire method for accomplishing this task then that would also be a great answer.)
What I have found so far
I found that doing the following command results in a TIF where perhaps 5% of pixels are different (full code sample below):
gdal_translate in.jpg out.tif -co COMPRESS=JPEG
The GDAL docs say that this command will result in a "lossless" conversion. I take it that "lossless" here is not the same thing as a guarantee that the arrays will be identical?
Specifying the block sizes to be equal to the size of the image appears to work?
If I additionally specify block size arguments in the conversion command, then the resulting arrays are identical (in my sample images):
# Suppose that in.jpg is 400x245
gdal_translate in.jpg out.tif -co COMPRESS=JPEG -co BLOCKXSIZE=400 -co BLOCKYSIZE=245
My (very fuzzy) best-guess intuition right now is that specifying this large block size prevents compression of the individual blocks? However:
The docs state that the lossless copy occurs "without decompression and compression cycles," so no compression should actually be occurring in the first place.
I am confused as to why gdal_translate is happy to accept BLOCKYSIZE=245 when 245 is not a multiple of 8.
The resulting out.tif has a single 400x245 block for each band, which appears to negatively affect read time in the resulting image (the sketch below shows how to inspect the block layout).
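If useful, the block layout of the output can be checked directly from GDAL's Python bindings (a small sketch; out.tif is the file produced by the command above):
from osgeo import gdal  # plain `import gdal` on older installs

ds = gdal.Open("out.tif")
band = ds.GetRasterBand(1)
print(band.GetBlockSize())  # e.g. [400, 245] when BLOCKXSIZE/BLOCKYSIZE are forced
ds = None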
Block size is only necessary when SOURCE_COLOR_SPACE is not RGB?
In further experiments (see demo code) it appears that, if an image has an RGB color space, then it is sufficient to specify -co COMPRESS=JPEG and nothing further in the conversion command.
Demo code
Here is some python code to demonstrate actual commands:
import os
from typing import Tuple

import requests
import numpy as np
import gdal

def read_image_gdal(filepath: str) -> np.ndarray:
    try:
        f = gdal.Open(filepath)
        arr = f.ReadAsArray()
        return arr
    finally:
        f = None
        del f

def get_image_size(filepath: str) -> Tuple[int, int]:
    try:
        f = gdal.Open(filepath)
        return f.RasterXSize, f.RasterYSize
    finally:
        f = None
        del f
# Ex. 1 Medium-sized image (400x245) with YCbCr color space ~> TIF_0 is different
SAMPLE_JPEG_URL= 'https://jpeg.org/images/about.jpg'
# # Ex. 2 Small (50x50) image with RGB Colorspace ~> TIF_0 is the SAME
# SAMPLE_JPEG_URL = 'https://raw.githubusercontent.com/OSGeo/gdal/0402b86928e09860e6d24215b6f5611c31fa3c69/autotest/gdrivers/data/jpeg/rgbsmall_rgb.jpg'
# # Ex. 3 Small (50x50) image but has CMYK color space. ~> TIF_0 is different (But see e.g. https://gdal.org/drivers/raster/gtiff.html#raw-mode)
# SAMPLE_JPEG_URL = 'https://raw.githubusercontent.com/OSGeo/gdal/master/autotest/gdrivers/data/jpeg/rgb_ntf_cmyk.jpg'
SAMPLE_JPEG_FILEPATH = "sample_file.jpg"
# Download sample JPEG file from GDAL github
res = requests.get(SAMPLE_JPEG_URL)
with open(SAMPLE_JPEG_FILEPATH, "wb") as file:
    file.write(res.content)
# ###########################
# EXPERIMENT: Convert JPEG to TIF with JPEG compression + settings.
# GOAL: TIF's array is same as original JPEG.
# ###########################
# Create a JPEG-in-TIF
TIF_0_FILEPATH = "sample_tif_0.tif"
os.system(f"gdal_translate -co COMPRESS=JPEG {SAMPLE_JPEG_FILEPATH} {TIF_0_FILEPATH}")
# JPEG-in-TIF BUT specify block size explicitly as the size of the image.
TIF_BS_FILEPATH = "sample_tif_bs.tif"
xs, ys = get_image_size(SAMPLE_JPEG_FILEPATH)
print(f"Size of image: {xs}x{ys}")
os.system(f"gdal_translate -co COMPRESS=JPEG -co BLOCKXSIZE={xs} -co BLOCKYSIZE={ys} {SAMPLE_JPEG_FILEPATH} {TIF_BS_FILEPATH}")
# Read and compare resulting arrays
arr = read_image_gdal(SAMPLE_JPEG_FILEPATH)
arr_0 = read_image_gdal(TIF_0_FILEPATH)
arr_bs = read_image_gdal(TIF_BS_FILEPATH)
print("Share Pixels different")
print(" JPEG-in-TIF: ", (arr != arr_0).mean())
print(" Explicit block size: ", (arr != arr_bs).mean())
Output is:
Size of image: 400x245
Share Pixels different
JPEG-in-TIF: 0.051727891156462584
Explicit block size: 0.0
The two additional SAMPLE_JPEG_URLs try to suss out whether the SOURCE_COLOR_SPACE matters. (An alternative explanation is that example #2 is a small image and that this affects the results.)
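If it helps to test that, the JPEG driver reports the source color space in the dataset's IMAGE_STRUCTURE metadata, so the samples can be inspected programmatically (a small sketch; the exact keys returned depend on the driver and GDAL version):
ds = gdal.Open(SAMPLE_JPEG_FILEPATH)
print(ds.GetMetadata("IMAGE_STRUCTURE"))  # expect e.g. SOURCE_COLOR_SPACE=YCbCr for Ex. 1
ds = None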

Why can't we convert flat columns of awkward1 arrays `to_parquet`?

A follow-up from this question: Best way to save a dict of awkward1 arrays?
To save multiple columns of nested awkward1 arrays (with varying length):
import numpy as np
import awkward1 as ak

dog = ak.from_iter([[1, 2], [5]])
cat = ak.from_iter([[4]])
pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
ak.to_parquet(pets, "pets.parquet")
Unfortunately, this doesn't seem to work for flat lists:
import numpy as np
import awkward1 as ak

dog = ak.from_iter([1, 2, 5])
cat = ak.from_iter([4])
pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
ak.to_parquet(pets, "pets.parquet")
which raises the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-7f3a7fefb261> in <module>
3 cat = ak.from_iter([3])
4 pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
----> 5 ak.to_parquet(pets, "pets.parquet")
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/awkward/operations/convert.py in to_parquet(array, where, explode_records, list_to32, string_to32, bytestring_to32, **options)
2983 layout = to_layout(array, allow_record=False, allow_other=False)
2984 iterator = batch_iterator(layout)
-> 2985 first = next(iterator)
2986
2987 if "schema" not in options:
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/awkward/operations/convert.py in batch_iterator(layout)
2978 )
2979 yield pyarrow.RecordBatch.from_arrays(
-> 2980 pa_arrays, schema=pyarrow.schema(pa_fields)
2981 )
2982
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()
TypeError: object of type 'pyarrow.lib.Tensor' has no len()
What is the reason for encountering this error?
What you found is a bug, and now it is fixed: https://github.com/scikit-hep/awkward-1.0/pull/799
What's happening here is that pyarrow can't write pyarrow.lib.Tensor (regular-length lists, such as the one you created with np.newaxis) to Parquet files. Parquet files don't have a concept of "regular-length list," so that makes sense. But rather than converting it, pyarrow hits an unhandled case, in which it fails to find the length of that pyarrow.lib.Tensor. (It's a little odd that pyarrow.lib.Tensor doesn't have a __len__ method, but that's another thing.)
Anyway, with version 1.2.0 of Awkward Array, we'll simply convert regular-length lists into (in principle) variable-length lists when writing to Parquet, since the format doesn't have that type. According to the schedule, version 1.2.0 will be released tomorrow. (This bug-fix is likely the last prerelease.)
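Until then, one possible workaround (a sketch, not the fix from the PR above) is to build the outer dimension as a variable-length list with from_iter rather than np.newaxis, so pyarrow never sees a regular-length (Tensor) type:
import awkward1 as ak

# one variable-length outer list per field, instead of a regular length-1 axis
dog = ak.from_iter([[1, 2, 5]])
cat = ak.from_iter([[4]])
pets = ak.zip({"dog": dog, "cat": cat}, depth_limit=1)
ak.to_parquet(pets, "pets.parquet")
The stored values are the same; only the type of the length-1 outer dimension changes from regular to variable-length.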

julia implement convert for struct containing NTuple

I'm trying to implement a convert for a struct containing an NTuple:
import Base: convert

abstract type AbstractMyType{N, T} end

struct MyType1{N, T} <: AbstractMyType{N, T}
    data::NTuple{T, N}
end

struct MyType2{N, T} <: AbstractMyType{N, T}
    data::NTuple{T, N}
end

foo(::Type{MyType2}, x::AbstractMyType{N, T}) where {N, T} = x
convert(::Type{MyType2}, x::AbstractMyType{N, T}) where {N, T} = MyType2{T}(x.data)

println(foo(MyType2, MyType1((1,2,3))))      # MyType1{Int64,3}((1, 2, 3))
println(convert(MyType2, MyType1((1,2,3))))  # MethodError
The functions foo and convert are defined with the same signature. For some reason foo returns normally while convert throws a MethodError. Why can't Julia find my convert method?
Julia version 1.4.1
Julia is finding your convert method:
julia> println(convert(MyType2, MyType1((1,2,3)))) # MethodError
ERROR: MethodError: no method matching MyType2{3,T} where T(::Tuple{Int64,Int64,Int64})
Stacktrace:
[1] convert(::Type{MyType2}, ::MyType1{Int64,3}) at ./REPL[16]:1
[2] top-level scope at REPL[18]:1
That stack trace is saying that it's inside your convert function (in my case, I defined it on the first line of the 16th REPL prompt). The problem is that it cannot find a MyType2{T}(::Tuple) constructor.
Julia automatically creates a number of constructors for you when you don't define an inner constructor; in this case you can call either MyType2(data) or MyType2{N, T}(data), but by default Julia doesn't know what to do when only one type parameter is passed:
julia> MyType2((1,2,3))
MyType2{Int64,3}((1, 2, 3))
julia> MyType2{Int, 3}((1,2,3))
MyType2{Int64,3}((1, 2, 3))
julia> MyType2{Int}((1,2,3))
ERROR: MethodError: no method matching MyType2{Int64,T} where T(::Tuple{Int64,Int64,Int64})
Stacktrace:
[1] top-level scope at REPL[7]:1
[2] eval(::Module, ::Any) at ./boot.jl:331
[3] eval_user_input(::Any, ::REPL.REPLBackend) at /Users/mbauman/Julia/release-1.4/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
[4] run_backend(::REPL.REPLBackend) at /Users/mbauman/.julia/packages/Revise/AMRie/src/Revise.jl:1023
[5] top-level scope at none:0
So the fix is either to define that single-type-parameter constructor method yourself, or to change the body of your convert method to call MyType2{N, T} explicitly.
Just define the method
convert(::Type{MyType2}, x::AbstractMyType{N, T}) where {N, T} = MyType2(x.data)
Testing:
julia> convert(MyType2, MyType1((1,2,3)))
MyType2{Int64,3}((1, 2, 3))

How do I solve a "subscripted assignment dimension mismatch" error?

I am new to this forum. Let me get started: I work in MATLAB and keep getting errors all the time. Finally I found a good forum like this one. My problem is this: I have an image which I want to put inside a larger matrix. Every time I do it I get
??? ERROR: subscripted assignment dimension mismatch
I tried everything possible, like resize, repmat, reshape... but I could not work out what is going wrong.
My code is like this:
nem(:,:,1) = image %// <-- error subscripted assignment dimension mismatch
The size of image is 71x71 (class double, 40328 bytes).
nem is created by
nem = zeros([size(inputimage,1),size(inputimage,2),12]);
size of inputimage is
[m,n,o] = size(inputimage);
m = 584 n = 565 o = 1
and size of nem:
[m,n,o] = size(img_out);
m = 584 n = 565 o = 12
You are trying to "fit" image, a 71-by-71 matrix, into nem(:,:,1), which is a 584-by-565 matrix. How do you expect MATLAB to do this type of assignment?
You can fit image into a part of nem:
>> nem( 1:size(image,1), 1:size(image,2), 1 ) = image
