Cargo.toml:

[dependencies]
image = "0.23.12"
fltk = "0.10.14"
I'd like to save RGB data as a JPEG file using Rust's image crate:
use image::{RgbImage};
use image::ImageBuffer;
use fltk::{image::RgbImage, button::*};
let sourceImg = image::open("imgs/2.jpg").unwrap();
// x and y are the image's width and height
let rgb = RgbImage::new(&sourceImg.to_bytes(), x, y, 3).unwrap();
let mut prevWindowImgButton = Button::new(0, 0, 300, 300, "");
prevWindowImgButton.set_image(Some(&rgb));
let rgbData = &prevWindowImgButton.image().unwrap().to_rgb_data();
// returns RGB data with type &Vec<u8>
rgbData.save("out/outputtest.jpg");
This gives the error:

rgbData.save("out/outputtest.jpg");
        ^^^^ method not found in `&Vec<u8>`
That is because .save must be called on an ImageBuffer. So how do I convert this RGB data into an ImageBuffer?
If all you want is to save a raw buffer to an image file using the image crate, you can use the save_buffer_with_format function:
use image::io::Reader;
use image::save_buffer_with_format;

fn main() {
    // The following three lines simply load a test image and convert it into buffer
    let img = Reader::open("myimage.png").unwrap().decode().unwrap().to_rgb8();
    let (width, height) = (img.width(), img.height());
    let img_byte_vec = img.into_raw();
    // The next line is what you want
    save_buffer_with_format("myimg.jpg", &img_byte_vec, width, height,
        image::ColorType::Rgb8, image::ImageFormat::Jpeg).unwrap();
}
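If you specifically want an ImageBuffer that you can call .save on, you can also build one from the raw bytes with RgbImage::from_raw. A minimal sketch, not from the original answer, assuming you already have the interleaved RGB bytes and the image dimensions (rgb_data, width and height are placeholder names):

use image::RgbImage; // type alias for ImageBuffer<Rgb<u8>, Vec<u8>>

fn save_rgb(rgb_data: Vec<u8>, width: u32, height: u32) {
    // from_raw returns None if rgb_data.len() != width * height * 3
    let img = RgbImage::from_raw(width, height, rgb_data)
        .expect("buffer length does not match width * height * 3");
    // The output format is inferred from the file extension
    img.save("out/outputtest.jpg").unwrap();
}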
Related
In C++ I am used to using stb_image to load image data to and from RAM.
I am writing a rust program where I have loaded some PNGs and JPEGs as raw binary data.
I am trying to use the image crate to read and decompress the raw byte data into pixel data and metadata (i.e. image dimensions and raw pixel byte data). I need to then recompress the data as a PNG and write it to disk, to make sure the data is ok (I will use the raw buffers later on).
To that effect I have this
let image_data = image::load_from_memory(bytes).unwrap();
where bytes is the raw image data.
Problem number 1: this seems to create a 3-channel image for JPEGs, but I need a 4-channel image for both PNGs and JPEGs, so for JPEGs I need the image crate to add padding. But if I try to cast it using as_rgba8 I can no longer get the width and height of the image.
Then I am trying to read the data into a custom struct, like this:
let mut raw_data = Vec::<u8>::new();
let width = image_data.width() as usize;
let height = image_data.height() as usize;
raw_data.extend_from_slice(image_data.as_bytes());

println!("{}", image_data.width());
println!("{}", image_data.height());
println!("{}", image_data.height() * image_data.width() * 3);
println!("{}", raw_data.len());

let texture = Texture {
    width,
    height,
    channel_num: 4,
    format: ImageFormat::RGBA8,
    data: raw_data,
};
This part seems to work. Next I am trying to recompress the data and write it to disk:
let tmp = RgbaImage::from_raw(
    texture.width as u32,
    texture.height as u32,
    texture.data.as_bytes().to_vec(),
).unwrap();

tmp.save("tmp.png");
In this case I am getting a None error on attempting the unwrap. I don't understand why since the byte buffer does have enough data to contain the full image, it was literally created by that image.
I am somewhat lost.
[...] this seems to create a 3 channel image for jpegs, I need a 4 channel image for both pngs and jpegs. [...] But if I try to cast it using as_rgba8 I can no longer get the width and the height of the image.
You want to convert the underlying image buffer, rather than cast the image. This is done with the to_* family of methods in DynamicImage. This works regardless of whether the dynamic image was obtained from a file or from memory (both open and load_from_memory return a DynamicImage).
use image::{RgbaImage, open}; // 0.24.3

let img = open("example.jpg")?;

println!(
    "Before: {}x{} {:?} ({} channels)",
    img.width(),
    img.height(),
    img.color(),
    img.color().channel_count()
);

let img: RgbaImage = img.to_rgba8();

println!(
    "After: {}x{} ({} channels)",
    img.width(),
    img.height(),
    img.sample_layout().channels
);
Note how the second img is already known at compile time to be an RGBA image. As an image buffer, you can freely retrieve any property or pixel data that you wish.
Possible output:
Before: 2864x2480 Rgb8 (3 channels)
After: 2864x2480 (4 channels)
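Tying this back to the failing round trip: RgbaImage::from_raw returned None because the extended buffer still held 3-channel JPEG data, so its length was width * height * 3 rather than the width * height * 4 that an RGBA buffer requires. A short sketch of the fixed flow, assuming bytes is the raw encoded image data from the question:

use image::RgbaImage;

fn save_rgba(bytes: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    // Convert to 4 channels up front so the raw length is width * height * 4
    let img = image::load_from_memory(bytes)?.to_rgba8();
    let (width, height) = (img.width(), img.height());
    let raw_data = img.into_raw(); // Vec<u8> of interleaved RGBA samples

    // from_raw now succeeds because the buffer length matches the RGBA layout
    let tmp = RgbaImage::from_raw(width, height, raw_data).unwrap();
    tmp.save("tmp.png")?;
    Ok(())
}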
I'm using Piston's image crate, with this code:
use image::{Rgb, ImageBuffer, Pixel};
let image = Vec::<Rgb<u8>>::new();
let image_buffer = ImageBuffer::<Rgb<u8>, Vec<Rgb<u8>>>::from_vec(
    width, height,
    image,
).unwrap();
However I get this error:
error[E0599]: no function or associated item named `from_vec` found for type `image::ImageBuffer<image::Rgb<u8>, std::vec::Vec<image::Rgb<u8>>>` in the current scope
--> src/main.rs:348:21
|
348 | let image_buffer = ImageBuffer::<Rgb<u8>, Vec<Rgb<u8>>>::from_vec(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ function or associated item not found in `image::ImageBuffer<image::Rgb<u8>, std::vec::Vec<image::Rgb<u8>>>`
I can't work out why. It's clearly in the documentation, and the types seem right as far as I can tell.
Expanding a bit: in the example above, we have an ImageBuffer::<Rgb<u8>, Vec<Rgb<u8>>>, and ImageBuffer provides two implementations of from_vec, depending on its type parameters:
impl<P, Container> ImageBuffer<P, Container>
where
    P: Pixel<Subpixel = u8> + 'static,
    Container: Deref<Target = [u8]>,

impl<P: Pixel + 'static> ImageBuffer<P, Vec<P::Subpixel>>
where
    P::Subpixel: 'static,
Neither of these worked here because the Container parameter type in ImageBuffer<Rgb<u8>, Vec<Rgb<u8>>> is a vector of Rgb<u8> values. It dereferences to a slice of pixel values ([Rgb<u8>]), making it incompatible with the first implementation, and the second one expects a vector of subpixel values (<P as Pixel>::Subpixel) rather than actual pixel values (Rgb<u8>). A flat vector of subpixels is generally what the ImageBuffer type in this crate expects as its pixel data container.
Working example:
extern crate image;

use image::{ImageBuffer, Pixel, Rgb};

fn main() {
    let width = 64;
    let height = 64;
    let image = vec![0x7F_u8; width as usize * height as usize * 3];
    let image_buffer =
        ImageBuffer::<Rgb<u8>, Vec<u8>>::from_vec(width, height, image).unwrap();
}
Ah it has to be a Vec<P::Subpixel>, i.e. Vec<u8> rather than a Vec<Rgb<u8>>. That is a little annoying.
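If you do start from a Vec<Rgb<u8>>, one way to feed it to from_vec is to flatten it into subpixel bytes first. A rough sketch (not from the original answer) using the Pixel::channels accessor:

use image::{ImageBuffer, Pixel, Rgb, RgbImage};

fn to_buffer(pixels: Vec<Rgb<u8>>, width: u32, height: u32) -> RgbImage {
    // Flatten Vec<Rgb<u8>> into a Vec<u8> of interleaved R, G, B samples
    let flat: Vec<u8> = pixels
        .iter()
        .flat_map(|px| px.channels().iter().copied())
        .collect();
    ImageBuffer::from_vec(width, height, flat)
        .expect("pixel count does not match width * height")
}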
I am loading a pixel font file as a PNG image. I then use it to draw each character to a canvas by writing to the imageData.data clamped array buffer and then using putImageData.
Is there a simpler way to get the data array from the PNG image, apart from loading the image as fontImage and then these 7 lines ...
let fontCanvas = document.createElement("canvas");
fontCanvas.width = fontImage.width;
fontCanvas.height = fontImage.height;
let fontContext = fontCanvas.getContext("2d");
fontContext.drawImage(fontImage, 0, 0);
let fontBank = fontContext.getImageData(0, 0, fontCanvas.width, fontCanvas.height);
let fontData = fontBank.data;
Thanks.
I'd like to learn Rust and thought it would be fun to procedurally generate images. I've no idea where to start though... piston/rust-image? But even with that where should I begin?
The place to begin is the docs and the repository.
It's not immediately obvious from the landing page of the documentation, but the core type in image is ImageBuffer.
The new function allows one to construct an ImageBuffer representing an image with the given width and height, storing pixels of a given type (e.g. RGB, or RGB with transparency). One can use methods like pixels_mut, get_pixel_mut and put_pixel (the latter two are below pixels_mut in the documentation) to modify the image. E.g.
extern crate image;

use image::{ImageBuffer, Rgb};

const WIDTH: u32 = 10;
const HEIGHT: u32 = 10;

fn main() {
    // a default (black) image containing Rgb values
    let mut image = ImageBuffer::<Rgb<u8>, Vec<u8>>::new(WIDTH, HEIGHT);

    // set a central pixel to white
    image.get_pixel_mut(5, 5).data = [255, 255, 255];

    // write it out to a file
    image.save("output.png").unwrap();
}
which produces a 10x10 black image with a single white pixel in the centre.
The repo is particularly useful as a starting point because it contains examples; in particular, it has an example of programmatically generating an image. When using a new library, I'll open the docs and, if confused, the repo, specifically to look for examples.
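For a flavour of what "programmatically generating an image" can look like, ImageBuffer::from_fn computes each pixel from its coordinates. A small sketch (a simple gradient, not the example from the repo):

use image::{ImageBuffer, Rgb};

fn main() {
    // Each pixel's colour is derived from its (x, y) coordinate
    let img = ImageBuffer::from_fn(256, 256, |x, y| Rgb([x as u8, y as u8, 128u8]));
    img.save("gradient.png").unwrap();
}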
NOTE: get_pixel_mut is deprecated since 0.24.0; use get_pixel and put_pixel instead.
As @huon's answer is 6 years old, I was getting errors reproducing the result, so I wrote this:
use image::{ImageBuffer, RgbImage};

const WIDTH: u32 = 10;
const HEIGHT: u32 = 10;

fn main() {
    let mut image: RgbImage = ImageBuffer::new(WIDTH, HEIGHT);
    *image.get_pixel_mut(5, 5) = image::Rgb([255, 255, 255]);
    image.save("output.png").unwrap();
}
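Following the deprecation note above, the same write can be expressed with put_pixel; a quick sketch of that variant:

use image::{ImageBuffer, RgbImage};

fn main() {
    let mut image: RgbImage = ImageBuffer::new(10, 10);
    // put_pixel is the non-deprecated way to write a single pixel
    image.put_pixel(5, 5, image::Rgb([255, 255, 255]));
    image.save("output.png").unwrap();
}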
I want to crop an image of 1176*640 to save the ROI of 1176*400 size. I am using the following snippet to achieve this, but I am still getting the original image as output.
IplImage *CMyFunction::ROI(IplImage *pImage, CvRect ROI)
{
    IplImage *mROI = cvCreateImage(cvGetSize(*pImage), IPL_DEPTH_8U, 1);
    cvSetImageROI(pImage, rect_ROI);
    cvCopy(pImage, mROI);
    cvResetImageROI(pImage);
    return mROI;
}
For cvCopy() the source and destination should be the same size and type; that is, parameters like width, height, depth, and number of channels should be equal for both images. In your case you can change your code in either of the following ways:
IplImage *mROI = cvCreateImage(cvGetSize(pImage), pImage->depth, pImage->nChannels); //create dest with same size as source
cvSetImageROI(pImage, rect_ROI); //Set roi on source
cvSetImageROI(mROI, rect_ROI); //set roi on dest
cvCopy(pImage, mROI);
cvResetImageROI(pImage);
cvResetImageROI(mROI);
or
IplImage *mROI = cvCreateImage(cvSize(rect_ROI.width,rect_ROI.height), pImage->depth, pImage->nChannels); // create an image of size as rect
cvSetImageROI(pImage, rect_ROI); //set roi on source
cvCopy(pImage, mROI);
cvResetImageROI(pImage);
I realised that the pointer is no longer valid once it leaves the function, so I declared a new IplImage* outside of the function and passed it in as a parameter, which proved effective.
IplImage *CMyFunction::ROI(IplImage *pImage, CvRect ROI, IplImage* FinalImage)