Low framerate when running Piston example - performance

I'm building a simple 2D game in Rust using Piston. I started from the examples in the Piston documentation and expanded them, and it works quite well. However, I get pretty bad performance:
Drawing only 2 squares gives me a framerate of about 30-40 FPS
Drawing 5 000 squares gives me a framerate of about 5 FPS
This is on a Core i7 @ 2.2 GHz running Windows 10. Rust version 1.8, Piston version 0.19.0.
Is this expected or have I made any mistakes in my code? Am I even measuring the FPS correctly?
extern crate piston_window;
extern crate piston;
extern crate rand;

use piston_window::*;
use rand::Rng;

fn main() {
    const SIZE: [u32; 2] = [600, 600];
    const GREEN: [f32; 4] = [0.0, 1.0, 0.0, 1.0];
    const NUM: u32 = 1000; // change this to change the number of polygons
    const SQUARESIZE: f64 = 10.0;

    // Create a Glutin window.
    let window: PistonWindow = WindowSettings::new("test", SIZE)
        .exit_on_esc(true)
        .build()
        .unwrap();

    let mut frames = 0;
    let mut passed = 0.0;
    let mut rng = rand::thread_rng();

    for e in window {
        if let Some(_) = e.render_args() {
            e.draw_2d(|c, g| {
                // Clear the screen.
                clear(GREEN, g);

                for i in 0..NUM {
                    // Set things up so that it looks pretty.
                    let x = (i % SIZE[0]) as f64;
                    let y = (i % SIZE[1]) as f64;
                    let fill = (x / (SIZE[0] as f64)) as f32;
                    let color: [f32; 4] = [fill, 1.0 - fill, fill, fill];
                    let x = rng.gen_range::<f64>(0.0, SIZE[0] as f64);

                    // Draw the square.
                    let square = rectangle::square(0.0, 0.0, SQUARESIZE);
                    let transform = c.transform.trans(x - SQUARESIZE / 2.0, y - SQUARESIZE / 2.0);
                    rectangle(color, square, transform, g);
                }

                frames += 1;
            });
        }

        if let Some(u) = e.update_args() {
            passed += u.dt;
            if passed > 1.0 {
                let fps = (frames as f64) / passed;
                println!("FPS: {}", fps);
                frames = 0;
                passed = 0.0;
            }
        }
    }
}
Thank you for your help.
EDIT: Task Manager tells me it only uses about 17K of memory, but one of my physical CPU cores maxes out when the FPS drops below about 20.
EDIT 2: Changed the code to a complete working example.
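A side note on the FPS measurement: the loop above counts rendered frames but divides them by the accumulated dt of update events, which is a fixed timestep and only approximates wall-clock time. As a point of comparison, here is a minimal, self-contained sketch of a wall-clock counter (the FpsCounter name and structure are my own illustration, not part of Piston):

use std::time::{Duration, Instant};

// Hypothetical helper: counts frames and reports the FPS roughly once per
// second, measured against wall-clock time.
struct FpsCounter {
    frames: u32,
    last_report: Instant,
}

impl FpsCounter {
    fn new() -> FpsCounter {
        FpsCounter { frames: 0, last_report: Instant::now() }
    }

    // Call this once per rendered frame.
    fn tick(&mut self) {
        self.frames += 1;
        let elapsed = self.last_report.elapsed();
        if elapsed >= Duration::from_secs(1) {
            let secs = elapsed.as_secs() as f64 + f64::from(elapsed.subsec_nanos()) * 1e-9;
            println!("FPS: {:.1}", self.frames as f64 / secs);
            self.frames = 0;
            self.last_report = Instant::now();
        }
    }
}

fn main() {
    let mut counter = FpsCounter::new();
    // Stand-in for a render loop; in the game this would be called from the render branch.
    for _ in 0..300 {
        std::thread::sleep(Duration::from_millis(5));
        counter.tick();
    }
}

Measured this way, the report stays independent of how often update events fire.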

Related

Dithered image larger than original (using the Rust image crate)

I'm learning Rust and wanted to try my hand at error diffusion dithering. I've got it working, but the dithered file ends up bigger than the original, which is the opposite of what's supposed to happen. The original JPEG image is 605 KB, but the dithered image is a whopping 2.57 MB. My knowledge of the image crate is very limited, and I found all the various structs for representing images confusing, so I must be missing something regarding the API.
Here's the code for dithering the image (only the parts I deemed relevant are included):
impl DiffusionKernel<'_> {
    pub const FLOYD_STEINBERG: DiffusionKernel<'_> = // Constructor

    fn distribute_error(
        &self,
        error: &(i16, i16, i16),
        image: &mut DynamicImage,
        width: u32,
        height: u32,
        x: u32,
        y: u32,
    ) {
        // "targets" are the pixels the error is distributed to.
        for target in self.targets {
            // Checks whether the target x and y are within the bounds of the image.
            // Also returns the x and y coordinates of the pixel, because the "target"
            // struct only describes the offset of the target pixel from the pixel
            // currently being processed.
            let (is_valid_target, target_x, target_y) =
                DiffusionKernel::is_valid_target(target, width, height, x, y);
            if !is_valid_target {
                continue;
            }

            let target_pix = image.get_pixel(target_x, target_y);
            // Distribute the error to target_pix.
            let new_pix = Rgba::from([new_r, new_g, new_b, 255]);
            image.put_pixel(target_x, target_y, new_pix);
        }
    }

    pub fn diffuse(&self, bit_depth: u8, image: &mut DynamicImage) {
        let width = image.width();
        let height = image.height();

        for x in 0..width {
            for y in 0..height {
                let pix = image.get_pixel(x, y);
                let pix_quantized = ColorUtil::reduce_color_bit_depth(pix, bit_depth); // Quantizes the color
                let error = (
                    pix.0[0] as i16 - pix_quantized.0[0] as i16,
                    pix.0[1] as i16 - pix_quantized.0[1] as i16,
                    pix.0[2] as i16 - pix_quantized.0[2] as i16,
                );

                image.put_pixel(x, y, pix_quantized);
                self.distribute_error(&error, image, width, height, x, y);
            }
        }

        // Distributing the error ends up creating colors like 7, 7, 7 or 12, 12, 12
        // instead of 0, 0, 0 for black, so here I'm just ensuring that the colors are
        // correctly quantized.
        // I think the algorithm shouldn't behave like this; I'll try to fix it later.
        for x in 0..width {
            for y in 0..height {
                let pix = image.get_pixel(x, y);
                let pix_quantized = ColorUtil::reduce_color_bit_depth(pix, bit_depth);
                image.put_pixel(x, y, pix_quantized);
            }
        }
    }
}
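For context (this is not part of the original code): the arithmetic that computes new_r, new_g and new_b is elided above. In classic Floyd–Steinberg dithering, each neighbouring pixel receives a fixed fraction of the quantization error, with weights 7/16, 3/16, 5/16 and 1/16. A minimal sketch of that per-channel update, assuming 8-bit channels, could look like this:

// Hypothetical helper, not from the original code: adds a weighted share of the
// quantization error to one channel of a neighbouring pixel, clamped to 0..=255.
fn add_weighted_error(channel: u8, error: i16, numerator: i16) -> u8 {
    // Floyd–Steinberg weights are numerator/16 with numerator in {7, 3, 5, 1}.
    let adjusted = channel as i16 + error * numerator / 16;
    adjusted.clamp(0, 255) as u8
}

fn main() {
    // Example: a channel value of 120 receiving 7/16 of an error of +40 becomes 137.
    assert_eq!(add_weighted_error(120, 40, 7), 137);
}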
Here's the code for loading and saving the image:
let format = "jpg";
let path = String::from("C:\\...\\Cat.".to_owned() + format);
let trimmed_path = path.trim(); // This needs to be here if I'm getting the path from the console

let bfr = Reader::open(trimmed_path)
    .unwrap()
    .with_guessed_format()
    .unwrap();
let mut dynamic = bfr.decode().unwrap();

// dynamic = dynamic.grayscale();
error_diffusion::DiffusionKernel::FLOYD_STEINBERG.diffuse(1, &mut dynamic);

dynamic
    .save(trimmed_path.to_owned() + "_dithered." + format)
    .expect("There was an error saving the image.");
OK, so I got back to trying to figure this out, and it looks like you just need to use an image encoder such as PngEncoder, plus a file to write to, in order to lower the bit depth of an image. The encoder takes bytes, not pixels, but thankfully images have an as_bytes method which returns exactly what you need.
Here's the code:
let img = image::open(path).expect("Failed to open image.");
let (width, height) = img.dimensions();
let writer = File::create(path.to_owned() + "_out.png").unwrap();

// This is the best encoder configuration for black/white images, which is my output:
// grayscale with multiple colors -> black/white using dithering.
let encoder = PngEncoder::new_with_quality(writer, CompressionType::Best, FilterType::NoFilter);
encoder
    .write_image(img.as_bytes(), width, height, ColorType::L8)
    .expect("Failed to write image.");
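One caveat (my note, not from the original post): write_image interprets the byte buffer according to the ColorType you pass, so if the DynamicImage is still RGB, its as_bytes buffer will not line up with ColorType::L8. A minimal sketch of converting to 8-bit grayscale first, assuming an image 0.24-style API (the save_as_gray_png helper name is mine):

use std::fs::File;
use image::codecs::png::{CompressionType, FilterType, PngEncoder};
use image::{ColorType, DynamicImage, ImageEncoder};

// Hypothetical helper: converts the image to 8-bit grayscale so the byte buffer
// matches ColorType::L8, then encodes it as a PNG.
fn save_as_gray_png(img: &DynamicImage, path: &str) -> Result<(), image::ImageError> {
    let gray = img.to_luma8(); // one byte per pixel
    let (width, height) = gray.dimensions();
    let writer = File::create(format!("{}_out.png", path))?;
    let encoder = PngEncoder::new_with_quality(writer, CompressionType::Best, FilterType::NoFilter);
    encoder.write_image(gray.as_raw(), width, height, ColorType::L8)
}

Whether the original snippet needs this probably depends on whether the image was already converted to grayscale earlier (for example via the commented-out grayscale() call).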

Rust-SDL2: Is there a way to draw a single pixel to the screen?

I'm making a project in Rust-SDL2, and I'm not exactly sure how to draw an individual pixel to the window. I tried looking into sdl2::rect::Point, but the documentation is rather confusing for me.
It seems the documentation for the sdl2 crate is written for someone who already knows how sdl works and just wants to use it in Rust. You may want to run through a few tutorials first…
Then again, just string-searching for point led me to the right function.
Quick example with some flickering points:
use rand::Rng;
use sdl2::event::Event;
use sdl2::keyboard::Keycode;
use sdl2::pixels::Color;
use sdl2::rect::Point;
use std::time::Duration;

pub fn main() {
    let sdl_context = sdl2::init().unwrap();
    let video_subsystem = sdl_context.video().unwrap();
    let window = video_subsystem
        .window("rust-sdl2 demo", 800, 600)
        .position_centered()
        .build()
        .unwrap();
    let mut canvas = window.into_canvas().build().unwrap();
    let mut event_pump = sdl_context.event_pump().unwrap();
    let mut i = 0;
    let mut rng = rand::thread_rng();

    'running: loop {
        for event in event_pump.poll_iter() {
            match event {
                Event::Quit { .. }
                | Event::KeyDown {
                    keycode: Some(Keycode::Escape),
                    ..
                } => break 'running,
                _ => {}
            }
        }

        canvas.set_draw_color(Color::RGB(0, 0, 0));
        canvas.clear();

        i = (i + 1) % 255;
        canvas.set_draw_color(Color::RGB(i, 64, 255 - i));

        let (w, h) = canvas.output_size().unwrap();
        let mut points = [Point::new(0, 0); 256];
        points.fill_with(|| Point::new(rng.gen_range(0..w as i32), rng.gen_range(0..h as i32)));
        // For performance, it's probably better to draw a whole bunch of points at once.
        canvas.draw_points(points.as_slice()).unwrap();

        canvas.present();
        ::std::thread::sleep(Duration::new(0, 1_000_000_000u32 / 60)); // sloppy FPS limit
    }
}

Convert a RAW image in Bayer encoding (RGGB) with 16-bit depth to RGB: final image is too dark and has a greenish cast (Rust)

I'm using a Rust library to parse raw ARW images (Sony RAW format). It gives me a buffer of 16-bit pixels and the CFA (Color Filter Array), which is RGGB; the data buffer contains height * width pixels in Bayer encoding. Each pixel is stored as 16 bits (however, I think the camera only uses 12 or 14 of the 16 bits per pixel).
I'm using a Bayer library for the demosaicing process. Currently, my final image is too dark and has a greenish cast after the demosaic step. I guess the error is that, before I pass the data to the bayer library, I transform each 16-bit value to 8 bits by dividing it by u16::MAX and multiplying it by u8::MAX. However, I don't know if this is the right approach.
I guess I need to perform additional steps between parsing the raw file and passing it to the bayer library. Any advice would be appreciated.
I can confirm that at least some demosaicing works. Here's a screenshot of the resulting image:
Current Code
The libraries I'm using are rawloader and bayer
let decoded_raw = rawloader::decode_file(path).unwrap();
let decoded_image_u16 = match &decoded_raw.data {
    RawImageData::Integer(data) => data,
    RawImageData::Float(_) => panic!("not supported yet"),
};

// u16 to u8 (this is probably wrong)
let mut decoded_image_u8 = decoded_image_u16
    .iter()
    .map(|val| {
        // TODO: find out how to interpret the u16!
        let val_f32 = *val as f32;
        let u16_max_f32 = u16::MAX as f32;
        let u8_max_f32 = u8::MAX as f32;
        (val_f32 / u16_max_f32 * u8_max_f32) as u8
    })
    .collect::<Vec<u8>>();

// Prepare the final RGB buffer.
let bytes_per_pixel = 3; // RGB
let mut demosaic_buf = vec![0; bytes_per_pixel * decoded_raw.width * decoded_raw.height];
let mut dst = bayer::RasterMut::new(
    decoded_raw.width,
    decoded_raw.height,
    bayer::RasterDepth::Depth8,
    &mut demosaic_buf,
);

// DEMOSAIC
// Adapter so that `bayer::run_demosaic` can read from the Vec.
let mut decoded_image_u8 = ReadableByteSlice::new(decoded_image_u8.as_slice());
bayer::run_demosaic(
    &mut decoded_image_u8,
    bayer::BayerDepth::Depth8,
    // RGGB is definitely right for my ARW file
    bayer::CFA::RGGB,
    bayer::Demosaic::Linear,
    &mut dst,
)
.unwrap();
I'm not sure if this is connected to the actual problem, but your conversion is way overkill.
To convert from the full range of a u16 to the full range of a u8, use:
(x >> 8) as u8
fn main() {
    let convert = |x: u16| (x >> 8) as u8;
    println!("{} -> {}", 0, convert(0));
    println!("{} -> {}", 30000, convert(30000));
    println!("{} -> {}", u16::MAX, convert(u16::MAX));
}
0 -> 0
30000 -> 117
65535 -> 255
I might be able to help you further if you post the input image, but without being able to reproduce your problem I don't think there will be much else here.

PIXIjs very slow on mobile compared to CSS

I'm testing PIXIjs for some simple 2D graphics: basically I'm sliding tiles with some background color and border animations, plus I'm masking some parts of the layout.
While it works great on desktops, on mobile devices it's really slow compared to the same slide + animations done with pure CSS (where, by the way, I'm using Crosswalk + Cordova, so the browser is always the same).
For moving tiles and animating colors I'm calling requestAnimationFrame for each tile, and I've disabled PIXI's ticker:
ticker.autoStart = false;
ticker.stop();
Could this slowness be due to a weaker GPU on mobile, or is it just about the way I use PIXI?
I'm not showing the full code because it's quite long (~800 lines).
The following is the routine I use for each tile once a slide is captured:
const animateTileBorderAndText = (tileObj, steps, _color, radius, textSize, strokeThickness, _config) => {
    let pixiTile = tileObj.tile;
    let s = 0;
    let graphicsData = pixiTile.graphicsData[0];
    let shape = graphicsData.shape;

    let textStyle = pixiTile.children[0].style;
    let textInc = (textSize - textStyle.fontSize) / steps;
    let strokeInc = (strokeThickness - textStyle.strokeThickness) / steps;

    let prevColor = graphicsData.fillColor;
    let color = _color !== null ? _color : prevColor;
    let alpha = pixiTile.alpha;

    let h = shape.height;
    let w = shape.width;
    let rad = shape.radius;
    let radiusInc = (radius - rad) / steps;

    let r = (prevColor & 0xFF0000) >> 16;
    let g = (prevColor & 0x00FF00) >> 8;
    let b = prevColor & 0x0000FF;

    let rc = (color & 0xFF0000) >> 16;
    let rg = (color & 0x00FF00) >> 8;
    let rb = color & 0x0000FF;

    let redStep = (rc - r) / steps;
    let greenStep = (rg - g) / steps;
    let blueStep = (rb - b) / steps;

    let paintColor = prevColor;
    let goPaint = color !== prevColor;

    let animate = (t) => {
        if (s === steps) {
            textStyle.fontSize = textSize;
            textStyle.strokeThickness = strokeThickness;
            //pixiTile.tint = color;
            if (!_config.SEMAPHORES.slide) {
                _config.SEMAPHORES.slide = true;
                PUBSUB.publish(_config.SLIDE_CODE, _config.torusModel.getData());
            }
            return true;
        }

        if (goPaint) {
            r += redStep;
            g += greenStep;
            b += blueStep;
            paintColor = (r << 16) + (g << 8) + b;
        }

        textStyle.fontSize += textInc;
        textStyle.strokeThickness += strokeInc;

        pixiTile.clear();
        pixiTile.beginFill(paintColor, alpha);
        pixiTile.drawRoundedRect(0, 0, h, w, rad + radiusInc * (s + 1));
        pixiTile.endFill();

        s++;
        return requestAnimationFrame(animate);
    };

    return animate();
};
The above function is called after the following one, which is called for each tile to make it slide.
const slideSingleTile = (tileObj, delta, axe, conf, SEM, tilesMap) => {
    let tile = tileObj.tile;
    let steps = conf.animationSteps;
    SEM.slide = false;

    let s = 0;
    let stepDelta = delta / steps;
    let endPos = tile[axe] + delta;

    let slide = (time) => {
        if (s === steps) {
            tile[axe] = endPos;
            tileObj.resetPosition();
            tilesMap[tileObj.row][tileObj.col] = tileObj;
            return tileObj.onSlideEnd(axe == 'x' ? 0 : 2);
        }

        tile[axe] += stepDelta;
        s++;
        return requestAnimationFrame(slide);
    };

    return slide();
};
For each finger gesture, a single column/row (of an NxM matrix of tiles) is slid and animated using the two functions above.
It's the first time I've used canvas.
I've read that canvas is way faster than DOM animations, and I've read very good reviews of PIXIjs, so I believe I'm doing something wrong.
Can someone help?
In the end I'm a complete dunce...
The issue is not with PIXIjs.
Basically, I was forcing 60 FPS! The number of steps to complete the animation is set to 12, which implies a 200 ms animation at 60 FPS (using requestAnimationFrame), but on low-end devices it is obviously going to be slower.
CSS animations take a duration as a parameter, so they automatically adapt to the device's hardware.
To solve the issue I'm adapting the number of steps during the animation: basically, if the animation takes longer than 200 ms, I reduce the number of steps proportionally.
I hope this helps web developers who are used to CSS animations and have just started developing with canvas.

Getting Pixel Color from an Image using CGPoint in Swift 3

I am trying to use this PixelExtractor class in Swift 3 and get an error:
Cannot invoke initializer for type 'UnsafePointer' with an argument list of type '(UnsafeMutableRawPointer?)'
class PixelExtractor: NSObject {

    let image: CGImage
    let context: CGContextRef?

    var width: Int {
        get {
            return CGImageGetWidth(image)
        }
    }

    var height: Int {
        get {
            return CGImageGetHeight(image)
        }
    }

    init(img: CGImage) {
        image = img
        context = PixelExtractor.createBitmapContext(img)
    }

    class func createBitmapContext(img: CGImage) -> CGContextRef {
        // Get image width, height
        let pixelsWide = CGImageGetWidth(img)
        let pixelsHigh = CGImageGetHeight(img)
        let bitmapBytesPerRow = pixelsWide * 4
        let bitmapByteCount = bitmapBytesPerRow * Int(pixelsHigh)

        // Use the generic RGB color space.
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        // Allocate memory for image data. This is the destination in memory
        // where any drawing to the bitmap context will be rendered.
        let bitmapData = malloc(bitmapByteCount)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)

        let size = CGSizeMake(CGFloat(pixelsWide), CGFloat(pixelsHigh))
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

        // Create the bitmap context.
        let context = CGBitmapContextCreate(bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, bitmapInfo.rawValue)

        // Draw the image onto the context.
        let rect = CGRect(x: 0, y: 0, width: pixelsWide, height: pixelsHigh)
        CGContextDrawImage(context, rect, img)

        return context!
    }

    func colorAt(x x: Int, y: Int) -> UIColor {
        assert(0 <= x && x < width)
        assert(0 <= y && y < height)

        let uncastedData = CGBitmapContextGetData(context)
        let data = UnsafePointer<UInt8>(uncastedData)

        let offset = 4 * (y * width + x)

        let alpha: UInt8 = data[offset]
        let red: UInt8 = data[offset + 1]
        let green: UInt8 = data[offset + 2]
        let blue: UInt8 = data[offset + 3]

        let color = UIColor(red: CGFloat(red) / 255.0, green: CGFloat(green) / 255.0, blue: CGFloat(blue) / 255.0, alpha: CGFloat(alpha) / 255.0)
        return color
    }
}
I tried to fix the error by changing
let data = UnsafePointer<UInt8>(uncastedData)
to
let data = UnsafeRawPointer(uncastedData)
but then I get another error: 'Type 'UnsafeRawPointer?' has no subscript members'.
How can I fix this?
You can write something like this when you have an UnsafeRawPointer in your data:
let alpha = data.load(fromByteOffset: offset, as: UInt8.self)
let red = data.load(fromByteOffset: offset+1, as: UInt8.self)
let green = data.load(fromByteOffset: offset+2, as: UInt8.self)
let blue = data.load(fromByteOffset: offset+3, as: UInt8.self)
Alternatively, you can get an UnsafeMutablePointer<UInt8> from uncastedData (assuming it's an UnsafeMutableRawPointer):
let data = uncastedData.assumingMemoryBound(to: UInt8.self)
Swift 3 (updated March 2017), Xcode 8 / iOS 10
Important: note that the returned UIColor is built with the red and blue channels swapped (red: b, blue: r), because in the underlying data the bytes are stored in reverse order.
First, create the extension (you can copy & paste it somewhere in your code):
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        if let pixelData = self.cgImage?.dataProvider?.data {
            let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
            let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

            let r = CGFloat(data[pixelInfo + 0]) / CGFloat(255.0)
            let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
            let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
            let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)

            return UIColor(red: b, green: g, blue: r, alpha: a)
        } else {
            // If something goes wrong, WHITE is returned; change as needed.
            return UIColor.white
        }
    }
}
Then just call it as:
let colorAtPixel : UIColor = (theView.image?.getPixelColor(pos: CGPoint(x: 2, y: 2)))!
Although the code returns the exact color, it seems it isn't returning the correct one for different CGPoints.
Might it be because of the screen resolution (1x, 2x, 3x)?
It would be great if someone could shed some light on the mystery...
Swift 3 (iOS 10.3)
extension UIImage {
    func getPixelColor(atLocation location: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * location.x / size.width
        let y: CGFloat = (self.size.height) * location.y / size.height

        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelIndex: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4

        let r = CGFloat(data[pixelIndex]) / CGFloat(255.0)
        let g = CGFloat(data[pixelIndex + 1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelIndex + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelIndex + 3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Usage:
let color = yourImageView.image!.getPixelColor(atLocation: location, withFrameSize: yourImageView.frame.size)
where location is a CGPoint and size is the size of your imageView.
The following section is taken from some Swift 3 code I'm using to sample pixels from an image to get the predominant hue which I use to generate a background for tableView rows. The mechanics for the hue selection process don't apply to your question, so I'm just providing the relevant fragment.
let colorSpace = CGColorSpaceCreateDeviceRGB() // UIExtendedSRGBColorSpace
let newImage = image.cgImage?.copy(colorSpace: colorSpace)
let pixelData = newImage?.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

var hueFrequency = [Int: Double]()
hueFrequency[1] = 1 // Add one entry so this serves as a default if no hues from the image pass the filters

let nStart = 1
let mStart = 1
for n in nStart...Int(image.size.width / samplingFactor) {
    for m in mStart...Int(image.size.height / samplingFactor) {
        let pixelInfo: Int = ((Int(image.size.width) * m * Int(samplingFactor)) + n * Int(samplingFactor)) * 4 // bytesPerPixel
        let b = CGFloat(data[pixelInfo]) / CGFloat(255.0) // cgImage bitmapInfo = rawValue 8194 -> BGRA ordering
        let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
        let r = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
Also, note that I found the bitmapInfo value (image.cgImage!.bitmapInfo with my parameters) indicated a reordering of the RGBA sequence to BGRA, which I had to account for when picking the bytes out of the data. If your colors are off, you may want to check this.
