I have an ndarray:
let mut a = Array3::<u8>::from_elem((50, 40, 3), 3);
and I use the image library:
let mut imgbuf = image::ImageBuffer::new(50, 40);
How can I save my ndarray as an image?
If there is a better image library than image for this, I could use it.
The easiest way is to ensure that the array is in standard layout (C-contiguous) with the image dimensions in (height, width, channel) order (HWC), or in an equivalent memory layout. This is necessary because image expects rows to be contiguous in memory.
Then, build an RgbImage using the type's from_raw function.
use image::RgbImage;
use ndarray::Array3;
fn array_to_image(arr: Array3<u8>) -> RgbImage {
    assert!(arr.is_standard_layout());
    let (height, width, _) = arr.dim();
    let raw = arr.into_raw_vec();
    RgbImage::from_raw(width as u32, height as u32, raw)
        .expect("container should have the right size for the image dimensions")
}
Example of use:
let mut array: Array3<u8> = Array3::zeros((200, 250, 3)); // 250x200 RGB
for ((x, y, z), v) in array.indexed_iter_mut() {
    *v = match z {
        0 => y as u8,
        1 => x as u8,
        2 => 0,
        _ => unreachable!(),
    };
}
let image = array_to_image(array);
image.save("out.png")?;
The output image:
Below are a few related helper functions, in case they are necessary.
Ndarrays can be converted to standard layout by calling the method as_standard_layout, available since version 0.13.0. Before this version, you would need to collect each array element into a vector and rebuild the array, like so:
fn to_standard_layout<A, D>(arr: Array<A, D>) -> Array<A, D>
where
    A: Clone,
    D: Dimension,
{
    let v: Vec<_> = arr.iter().cloned().collect();
    let dim = arr.dim();
    Array::from_shape_vec(dim, v).unwrap()
}
Moreover, converting an ndarray in the layout (width, height, channel) to (height, width, channel) is also possible by swapping the first two axes and making the array C-contiguous afterwards:
fn wh_to_hw(mut arr: Array3<u8>) -> Array3<u8> {
    arr.swap_axes(0, 1);
    arr.as_standard_layout().to_owned()
}
Related
I'm learning Rust and wanted to try my hand at error diffusion dithering. I've got it working, but the dithered file ends up bigger than the original, which is the opposite of what's supposed to happen. The original JPEG is 605 KB, but the dithered image is a whopping 2.57 MB. My knowledge of the image crate is very limited, and I found all the various structs for representing images confusing, so I must be missing something regarding the API.
Here's the code for dithering the image (included are only parts which I deemed relevant):
impl DiffusionKernel<'_> {
    pub const FLOYD_STEINBERG: DiffusionKernel<'_> = // Constructor

    fn distribute_error(
        &self,
        error: &(i16, i16, i16),
        image: &mut DynamicImage,
        width: u32,
        height: u32,
        x: u32,
        y: u32,
    ) {
        // "targets" are the pixels to distribute the error to
        for target in self.targets {
            // Checks if the target x and y are in the bounds of the image.
            // Also returns the x and y coordinates of the pixel, because the "target"
            // struct only describes the offset of the target pixel from the pixel
            // currently being processed.
            let (is_valid_target, target_x, target_y) =
                DiffusionKernel::is_valid_target(target, width, height, x, y);
            if !is_valid_target {
                continue;
            }
            let target_pix = image.get_pixel(target_x, target_y);
            // Distribute the error to the target_pix
            let new_pix = Rgba::from([new_r, new_g, new_b, 255]);
            image.put_pixel(target_x, target_y, new_pix);
        }
    }

    pub fn diffuse(&self, bit_depth: u8, image: &mut DynamicImage) {
        let width = image.width();
        let height = image.height();
        for x in 0..width {
            for y in 0..height {
                let pix = image.get_pixel(x, y);
                let pix_quantized = ColorUtil::reduce_color_bit_depth(pix, bit_depth); // Quantizes the color
                let error = (
                    pix.0[0] as i16 - pix_quantized.0[0] as i16,
                    pix.0[1] as i16 - pix_quantized.0[1] as i16,
                    pix.0[2] as i16 - pix_quantized.0[2] as i16,
                );
                image.put_pixel(x, y, pix_quantized);
                self.distribute_error(&error, image, width, height, x, y);
            }
        }
        // Distributing the error ends up creating colors like 7, 7, 7, or 12, 12, 12
        // instead of 0, 0, 0 for black, so here I'm just ensuring that the colors
        // are correctly quantized.
        // I think the algorithm shouldn't behave like this, I'll try to fix it later.
        for x in 0..width {
            for y in 0..height {
                let pix = image.get_pixel(x, y);
                let pix_quantized = ColorUtil::reduce_color_bit_depth(pix, bit_depth);
                image.put_pixel(x, y, pix_quantized);
            }
        }
    }
}
Here's the code for loading and saving the image:
let format = "jpg";
let path = String::from("C:\\...\\Cat.".to_owned() + format);
let trimmed_path = path.trim(); // This needs to be here if I'm getting the path from the console
let bfr = Reader::open(trimmed_path)
    .unwrap()
    .with_guessed_format()
    .unwrap();
let mut dynamic = bfr.decode().unwrap();
// dynamic = dynamic.grayscale();
error_diffusion::DiffusionKernel::FLOYD_STEINBERG.diffuse(1, &mut dynamic);
dynamic
    .save(trimmed_path.to_owned() + "_dithered." + format)
    .expect("There was an error saving the image.");
Ok, so I got back to trying to figure this out, and it looks like you just need an image encoder like PngEncoder and a file to write to in order to lower the bit depth of an image. The encoder takes bytes, not pixels, but thankfully images have an as_bytes method which returns what you need.
Here's the code:
let img = image::open(path).expect("Failed to open image.");
let (width, height) = img.dimensions();
let writer = File::create(path.to_owned() + "_out.png").unwrap();
// This is the best encoder configuration for black/white images, which is my output:
// grayscale with multiple colors -> black/white using dithering
let encoder = PngEncoder::new_with_quality(writer, CompressionType::Best, FilterType::NoFilter);
encoder
    .write_image(img.as_bytes(), width, height, ColorType::L8)
    .expect("Failed to write image.");
use image::{Rgb, RgbImage};
use rayon::prelude::*;
#[inline]
fn lerp(pct: f32, a: f32, b: f32) -> f32 {
    pct.mul_add(b - a, a)
}

#[inline]
fn distance(x: i32, y: i32) -> f32 {
    ((x * x + y * y) as f32).sqrt()
}

struct ColorCalculator {
    from: [f32; 3],
    to: [f32; 3],
    center_x: i32,
    center_y: i32,
    max_dist: f32,
}

impl ColorCalculator {
    fn new(from: [u8; 3], to: [u8; 3], width: u32, height: u32) -> Self {
        let center_x = width as i32 / 2;
        let center_y = height as i32 / 2;
        Self {
            from: from.map(|channel| channel as f32),
            to: to.map(|channel| channel as f32),
            center_x,
            center_y,
            max_dist: distance(center_x, center_y),
        }
    }

    fn calculate(&self, x: u32, y: u32) -> [u8; 3] {
        let x_dist = self.center_x - x as i32;
        let y_dist = self.center_y - y as i32;
        let t = distance(x_dist, y_dist) / self.max_dist;
        [
            lerp(t, self.from[0], self.to[0]) as u8,
            lerp(t, self.from[1], self.to[1]) as u8,
            lerp(t, self.from[2], self.to[2]) as u8,
        ]
    }
}

fn radial_gradient(geometry: [u32; 2], inner_color: [u8; 3], outer_color: [u8; 3]) -> RgbImage {
    let [width, height] = geometry;
    let color_calculator = ColorCalculator::new(inner_color, outer_color, width, height);
    let mut background = RgbImage::new(width, height);
    (0..height / 2).into_par_iter().for_each(|y| {
        for x in 0..width / 2 {
            let color = Rgb(color_calculator.calculate(x, y));
            background.put_pixel(x, y, color);
            background.put_pixel(width - x - 1, y, color);
            background.put_pixel(x, height - y - 1, color);
            background.put_pixel(width - x - 1, height - y - 1, color);
        }
    });
    background
}
I know I could just use a mutex here, although it should be unnecessary: provided my code is correct, no pixel is mutated more than once. So how do I tell Rust that doing background.put_pixel(x, y, color) in multiple threads is actually okay here?
I'm guessing some use of unsafe is needed, although I am new to Rust and am not sure how to use it effectively here.
Here's the error
error[E0596]: cannot borrow `background` as mutable, as it is a captured variable in a `Fn` closure
--> src\lib.rs:212:13
|
212 | background.put_pixel(x, y, color);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
error[E0596]: cannot borrow `background` as mutable, as it is a captured variable in a `Fn` closure
--> src\lib.rs:213:13
|
213 | background.put_pixel(width - x - 1, y, color);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
error[E0596]: cannot borrow `background` as mutable, as it is a captured variable in a `Fn` closure
--> src\lib.rs:214:13
|
214 | background.put_pixel(x, height - y - 1, color);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
error[E0596]: cannot borrow `background` as mutable, as it is a captured variable in a `Fn` closure
--> src\lib.rs:215:13
|
215 | background.put_pixel(width - x - 1, height - y - 1, color);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
You can't. At least not with an RgbImage.
put_pixel takes a &mut self. In Rust, it's undefined behavior to have two &mut references alias - the optimizer can do some funky stuff to your code if you break this assumption.
You will probably have an easier time creating a Vec<u8> of pixel data, calculating each pixel's value using Rayon's parallel iterators (which will take special care to not alias the mutable references), then assemble the buffer into an image using from_vec.
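To make the shape of that concrete, here is a std-only sketch (placeholder names and a placeholder color function, not the asker's code). Each chunks_exact_mut(3) chunk is one pixel's RGB bytes; with Rayon you would swap in par_chunks_exact_mut, which hands each closure a disjoint &mut chunk, so the mutable borrows never alias:

```rust
// Fill a raw RGB buffer pixel-by-pixel; the buffer can then be assembled
// into an image with RgbImage::from_vec(width, height, buf).
fn fill_buffer(width: u32, height: u32) -> Vec<u8> {
    let mut buf = vec![0u8; width as usize * height as usize * 3];
    // with rayon: buf.par_chunks_exact_mut(3).enumerate().for_each(...)
    buf.chunks_exact_mut(3).enumerate().for_each(|(i, px)| {
        let x = i as u32 % width;
        let y = i as u32 / width;
        // placeholder color computation; a real version would call the
        // gradient calculation here
        px.copy_from_slice(&[x as u8, y as u8, 0]);
    });
    buf
}
```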
You can't do this with RgbImage (or any ImageBuffer), but you can do it if you work on a raw Vec<u8> in pure safe code.
Essentially the idea is to use split_at_mut and par_(r)chunks_exact_mut to produce parallel iterators that start from each corner of the image.
First, we allocate a chunk of memory
fn radial_gradient(geometry: [u32; 2], inner_color: [u8; 3], outer_color: [u8; 3]) -> RgbImage {
    let [width, height] = geometry;
    let color_calculator = ColorCalculator::new(inner_color, outer_color, width, height);
    // assertions here to hopefully help the optimizer later
    assert!(width % 2 == 0);
    assert!(height % 2 == 0);
    // allocate memory for the image
    let mut background = RgbImage::new(width, height).into_vec();
Then, we split the memory into a top and bottom half, and create iterators over each row, starting at the top and bottom
    let width = width as usize;
    let height = height as usize;
    // split background into top and bottom,
    // so we can utilize the x-axis symmetry to reduce color calculations
    let (top, bottom) = background.split_at_mut(width * height * 3 / 2);
    // use chunks to split each half into rows,
    // so we can utilize the y-axis symmetry to reduce color calculations
    let top_rows = top.par_chunks_exact_mut(width * 3);
    let bottom_rows = bottom.par_rchunks_exact_mut(width * 3);
Then, we zip those iterators together, so we iterate over the top row and bottom row together, then the next row in on each side, etc. Add enumerate to get the Y coordinate and then we will flat_map so our pixel iterator later gets unwrapped into the main iterator.
    // zip to iterate over top and bottom row together
    // enumerate to get the Y coordinate
    top_rows.zip(bottom_rows).enumerate().flat_map(|(y, (top_row, bottom_row))| {
Then, split each row at the middle so we can have four iterators, one for each corner
        // split each row at the y-axis
        let (tl, tr) = top_row.split_at_mut(width * 3 / 2);
        let (bl, br) = bottom_row.split_at_mut(width * 3 / 2);
        // iterate over pixels (chunks of 3 bytes) from the
        // top left, bottom left, top right, and bottom right half-rows
        let tl = tl.par_chunks_exact_mut(3);
        let bl = bl.par_chunks_exact_mut(3);
        let tr = tr.par_rchunks_exact_mut(3);
        let br = br.par_rchunks_exact_mut(3);
Then, zip the four pixel iterators together, so we iterate from the four corners simultaneously
        // zip to iterate over each set of four pixels together
        // enumerate to get the X coordinate
        tl.zip_eq(bl).zip_eq(
            tr.zip_eq(br)
        ).enumerate().map(move |(x, ((tl, bl), (tr, br)))| {
            // add the y coordinate to the pixel-wise iterator
            ((x, y), (tl, bl, tr, br))
        })
Then, iterate over each set of four pixels, copying the color into each
And convert back into an RgbImage
    }).for_each(|((x, y), (tl, bl, tr, br))| {
        // copy the color into the four symmetric pixels
        let color = color_calculator.calculate(x as u32, y as u32);
        tl.copy_from_slice(&color);
        bl.copy_from_slice(&color);
        tr.copy_from_slice(&color);
        br.copy_from_slice(&color);
    });
    RgbImage::from_vec(width as u32, height as u32, background).unwrap()
}
It's hard to say if this will be more or less performant than the strategy of coloring one quadrant and copying it to the rest, but it's worth a try. It may also not be worth the cognitive overhead of all of the chunking wizardry going on.
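For comparison, here is a minimal std-only sketch of that quadrant-coloring strategy (serial for clarity; calc stands in for ColorCalculator::calculate, and all names here are my own): compute each color once for the top-left quadrant and write it to the four mirrored positions.

```rust
// Fill a raw RGB buffer by mirroring the top-left quadrant.
// Assumes even dimensions, like the chunked version above.
fn mirrored_fill(width: usize, height: usize, calc: impl Fn(usize, usize) -> [u8; 3]) -> Vec<u8> {
    assert!(width % 2 == 0 && height % 2 == 0);
    let mut buf = vec![0u8; width * height * 3];
    for y in 0..height / 2 {
        for x in 0..width / 2 {
            let color = calc(x, y);
            let mirrored = [
                (x, y),
                (width - 1 - x, y),
                (x, height - 1 - y),
                (width - 1 - x, height - 1 - y),
            ];
            for (px, py) in mirrored {
                let i = (py * width + px) * 3;
                buf[i..i + 3].copy_from_slice(&color);
            }
        }
    }
    buf
}
```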
Full code
fn radial_gradient(geometry: [u32; 2], inner_color: [u8; 3], outer_color: [u8; 3]) -> RgbImage {
    let [width, height] = geometry;
    let color_calculator = ColorCalculator::new(inner_color, outer_color, width, height);
    // assertions here to hopefully help the optimizer later
    assert!(width % 2 == 0);
    assert!(height % 2 == 0);
    // allocate memory for the image
    let mut background = RgbImage::new(width, height).into_vec();
    let width = width as usize;
    let height = height as usize;
    // split background into top and bottom,
    // so we can utilize the x-axis symmetry to reduce color calculations
    let (top, bottom) = background.split_at_mut(width * height * 3 / 2);
    // use chunks to split each half into rows,
    // so we can utilize the y-axis symmetry to reduce color calculations
    let top_rows = top.par_chunks_exact_mut(width * 3);
    let bottom_rows = bottom.par_rchunks_exact_mut(width * 3);
    // zip to iterate over top and bottom row together
    // enumerate to get the Y coordinate
    top_rows.zip(bottom_rows).enumerate().flat_map(|(y, (top_row, bottom_row))| {
        // split each row at the y-axis
        let (tl, tr) = top_row.split_at_mut(width * 3 / 2);
        let (bl, br) = bottom_row.split_at_mut(width * 3 / 2);
        // iterate over pixels (chunks of 3 bytes) from the
        // top left, bottom left, top right, and bottom right half-rows
        let tl = tl.par_chunks_exact_mut(3);
        let bl = bl.par_chunks_exact_mut(3);
        let tr = tr.par_rchunks_exact_mut(3);
        let br = br.par_rchunks_exact_mut(3);
        // zip to iterate over each set of four pixels together
        // enumerate to get the X coordinate
        tl.zip_eq(bl).zip_eq(
            tr.zip_eq(br)
        ).enumerate().map(move |(x, ((tl, bl), (tr, br)))| {
            // add the y coordinate to the pixel-wise iterator
            ((x, y), (tl, bl, tr, br))
        })
    }).for_each(|((x, y), (tl, bl, tr, br))| {
        // copy the color into the four symmetric pixels
        let color = color_calculator.calculate(x as u32, y as u32);
        tl.copy_from_slice(&color);
        bl.copy_from_slice(&color);
        tr.copy_from_slice(&color);
        br.copy_from_slice(&color);
    });
    RgbImage::from_vec(width as u32, height as u32, background).unwrap()
}
I use a Rust library to parse raw ARW images (Sony Raw Format). I get a raw buffer of 16-bit pixels, it gives me the CFA (Color Filter Array) (which is RGGB), and the data buffer contains height * width pixels in Bayer encoding. Each pixel is stored as 16 bits (however, I think the camera only uses 12 or 14 of the 16 bits for each pixel).
I'm using a Bayer library for the demosaicing process. Currently, my final image is too dark and has a greenish cast after the demosaic process. I guess the error is that before I pass the data to the bayer library, I try to transform each 16-bit value to 8 bits by dividing it by u16::MAX and multiplying it by u8::MAX. However, I don't know if this is the right approach.
I guess I need to perform additional steps between the parsing of the raw file and passing it to the bayer library. Any advice would be appreciated.
I can ensure that at least some demosaicing works. Here's a screenshot of the resulting image:
Current Code
The libraries I'm using are rawloader and bayer
let decoded_raw = rawloader::decode_file(path).unwrap();
let decoded_image_u16 = match &decoded_raw.data {
    RawImageData::Integer(data) => data,
    RawImageData::Float(_) => panic!("not supported yet"),
};

// u16 to u8 (this is probably wrong)
let mut decoded_image_u8 = decoded_image_u16
    .iter()
    .map(|val| {
        // todo find out how to interpret the u16!
        let val_f32 = *val as f32;
        let u16_max_f32 = u16::MAX as f32;
        let u8_max_f32 = u8::MAX as f32;
        (val_f32 / u16_max_f32 * u8_max_f32) as u8
    })
    .collect::<Vec<u8>>();

// prepare final RGB buffer
let bytes_per_pixel = 3; // RGB
let mut demosaic_buf = vec![0; bytes_per_pixel * decoded_raw.width * decoded_raw.height];
let mut dst = bayer::RasterMut::new(
    decoded_raw.width,
    decoded_raw.height,
    bayer::RasterDepth::Depth8,
    &mut demosaic_buf,
);

// DEMOSAIC
// adapter so that `bayer::run_demosaic` can read from the Vec
let mut decoded_image_u8 = ReadableByteSlice::new(decoded_image_u8.as_slice());
bayer::run_demosaic(
    &mut decoded_image_u8,
    bayer::BayerDepth::Depth8,
    // RGGB is definitely right for my ARW file
    bayer::CFA::RGGB,
    bayer::Demosaic::Linear,
    &mut dst,
)
.unwrap();
I'm not sure if this is connected to the actual problem, but your conversion is way overkill.
To convert from the full range of a u16 to the full range of a u8, use:
(x >> 8) as u8
fn main() {
    let convert = |x: u16| (x >> 8) as u8;
    println!("{} -> {}", 0, convert(0));
    println!("{} -> {}", 30000, convert(30000));
    println!("{} -> {}", u16::MAX, convert(u16::MAX));
}
0 -> 0
30000 -> 117
65535 -> 255
I might be able to help you further if you post the input image, but without being able to reproduce your problem I don't think there will be much else here.
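One follow-up worth checking, under the question's assumption that the sensor only fills 12 or 14 of the 16 bits: dividing by u16::MAX (or shifting right by 8) then makes everything roughly 16x or 4x too dark. Below is a sketch that scales by the sensor's black and white levels instead; rawloader's RawImage exposes per-channel levels, but the exact field names and this per-pixel form are assumptions to adapt to your data. The greenish cast is a separate issue, typically addressed by applying white-balance coefficients before or after demosaicing.

```rust
// Map a raw sample to u8 using the sensor's actual usable range instead of
// the full u16 range. E.g. with black = 512 and white = 16383 (14-bit data),
// a sample at the white level maps to 255 rather than ~63.
fn scale_to_u8(val: u16, black: u16, white: u16) -> u8 {
    let v = val.saturating_sub(black) as f32;
    let range = (white - black) as f32;
    (v / range * 255.0).min(255.0) as u8
}
```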
I try to do some basic drawing like so:
pub fn draw_rect(dc: *mut HDC__, rect: RECT, (r, g, b): (u8, u8, u8)) {
    let original = unsafe { SelectObject(dc, DC_PEN as HGDIOBJ) };
    let prev_color = unsafe { SetDCPenColor(dc, RGB(r, g, b)) };
    // some vector bs here
    unsafe {
        PolyDraw(dc, pts.as_ptr(), pt_instructions.as_ptr(), num_pts);
        SetDCPenColor(dc, prev_color);
        DeleteObject(SelectObject(dc, original));
    }
}
And draw it does, just always in black. I don't have a monochrome DC, that's for sure, because GdiTransparentBlt works fine. And I think I'm right about the order of things:
1. select pen as current object, saving the old one
2. change pen color, store the old one,
3. do the drawing,
4. reassign the old color,
5. load the old object
So, just as I posted it, I realized that what I'm doing in the first line of the function is nonsense: DC_PEN is just a u32 (DWORD) constant, which is by no means a GDI object handle. To get one from DC_PEN, I had to change it to
pub fn draw_rect(dc: *mut HDC__, rect: RECT, (r, g, b): (u8, u8, u8)) {
    let original = unsafe { SelectObject(dc, GetStockObject(DC_PEN as i32)) };
    let prev_color = unsafe { SetDCPenColor(dc, RGB(r, g, b)) };
    ...
and now it works.
I have been looking for an answer to this question for several days, and I still have not completely figured it out. It is worth clarifying that I am looking for a way not just to save the screenshot as a file, but to get an array of bytes, and that array decoded to PNG/JPG or another lightweight image format.
The first way I tried was the screenshot-rs crate, with code like this:
let s = match get_screenshot(0) {
    Ok(it) => it,
    _ => return,
};
let mut buf = Vec::<u8>::new();
let encoder = PngEncoder::new(&mut buf);
unsafe {
    match encoder.encode(
        std::slice::from_raw_parts(s.raw_data(), s.raw_len()),
        s.width() as u32,
        s.height() as u32,
        image::ColorType::Rgba8,
    ) {
        Ok(it) => it,
        _ => return,
    };
}
(I use the png encoder here too.)
Formally, this code works, but it works very slowly (around 5 seconds) and produces a screenshot with the blue and red color channels swapped.
After this I started trying to use winapi to capture the screenshot, but my knowledge of winapi and its hundreds of data types is limited. Here is what I have so far (it doesn't work properly):
let x1 = GetSystemMetrics(78); // SM_CXVIRTUALSCREEN
let y1 = GetSystemMetrics(79); // SM_CYVIRTUALSCREEN
let x2 = GetSystemMetrics(76); // SM_XVIRTUALSCREEN
let y2 = GetSystemMetrics(77); // SM_YVIRTUALSCREEN
let w = x2 - x1;
let h = y2 - y1;
let hdc_screen = GetDC(ptr::null_mut());
let hdc = CreateCompatibleDC(hdc_screen);
let h_bitmap = CreateCompatibleBitmap(hdc_screen, w, h);
let obj = SelectObject(hdc, h_bitmap as HGDIOBJ);
let status = BitBlt(hdc, 0, 0, w, h, hdc_screen, x1, y1, SRCCOPY);