Save framebuffer to image file works on desktop but not Android - opengl-es

I'm trying to draw to an off-screen frame buffer and then save it to a file. This works on desktop, but on Android it results in an empty image, and no exceptions are thrown hinting at what could be wrong. Below is the code I'm using. Does anyone have an idea why this wouldn't work on Android?
private void saveOffscreenImage() {
    final int w = 256;
    final int h = 256;
    final OrthographicCamera camera = new OrthographicCamera(w, h);
    camera.setToOrtho(true, w, h);
    final Batch batch = new SpriteBatch();
    batch.setProjectionMatrix(camera.combined);
    final FrameBuffer fb = new FrameBuffer(Pixmap.Format.RGBA8888, w, h, false);
    fb.begin();
    batch.begin();
    Gdx.gl.glClearColor(1f, 0f, 0f, 1f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    // draw a texture
    batch.draw(img, 0, 0, 128, 128);
    batch.end();
    final Pixmap pixmap = new Pixmap(w, h, Pixmap.Format.RGB888);
    final ByteBuffer buf = pixmap.getPixels();
    Gdx.gl.glReadPixels(0, 0, w, h, GL20.GL_RGB, GL20.GL_UNSIGNED_BYTE, buf);
    // Prints error code: 1282
    Gdx.app.log("!", "error: " + Gdx.gl.glGetError());
    fb.end();
    final FileHandle file = Gdx.files.absolute("path/test.png");
    PixmapIO.writePNG(file, pixmap);
}

Seems like you have to read the alpha as well: GL_RGB should be GL_RGBA.
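A minimal sketch of the corrected read-back, assuming the same w, h, fb, and file as in the question. GLES 2.0 only guarantees the GL_RGBA/GL_UNSIGNED_BYTE combination for glReadPixels, which would explain why GL_RGB works on desktop GL but produces GL_INVALID_OPERATION (1282) on Android; the Pixmap also needs to be RGBA8888 so its buffer holds four bytes per pixel:
// ... draw into fb exactly as above, then, still between fb.begin() and fb.end():
final Pixmap pixmap = new Pixmap(w, h, Pixmap.Format.RGBA8888);
Gdx.gl.glReadPixels(0, 0, w, h, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixmap.getPixels());
fb.end();
PixmapIO.writePNG(file, pixmap); // same FileHandle as before
pixmap.dispose();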

Related

DirectX11 Sampling 1D Texture Gives Incorrect Color

For background, I am using SharpDX version 4.2.0.
I am having a hard time getting the correct color in a pixel shader by sampling a 1D texture.
Here's how I am creating the 1D texture, rounding the width up because textures can be finicky if they are not multiples of DataBox.RowPitch:
public static ShaderResourceView CreateUpdateable1DTexture(SharpDX.Direct3D11.Device device, int width)
{
    // Round the width up to the next multiple of 128 texels so it plays nicely with RowPitch.
    width = ((width / 128) * 128) + 128;
    // Describe and create a Texture1D.
    Texture1DDescription textureDesc = new Texture1DDescription()
    {
        MipLevels = 1,
        Format = Format.R8G8B8A8_UNorm,
        Width = width,
        ArraySize = 1,
        BindFlags = BindFlags.ShaderResource,
        Usage = ResourceUsage.Dynamic,
        CpuAccessFlags = CpuAccessFlags.Write,
    };
    var tex1D = new Texture1D(device, textureDesc);
    var resourceView = new ShaderResourceView(device, tex1D);
    tex1D.Dispose();
    return resourceView;
}
Here's how I am updating the texture:
public static unsafe void Update1DTexture(SharpDX.Direct3D11.Device device, ShaderResourceView textureResource, SharpDX.Color[] colors)
{
    var modifiedColorLength = ((colors.Length / 128) * 128) + 128;
    // double check the bounds are fine for this texture
    var texture = textureResource.ResourceAs<Texture1D>();
    if (texture.Description.Width != modifiedColorLength)
        throw new InvalidOperationException("Can't update this texture with this color array, texture size mismatch");
    byte[] textureStreamBytes = new byte[modifiedColorLength * 4];
    fixed (SharpDX.Color* colorPtr = &colors[0])
    {
        fixed (byte* bytePtr = &textureStreamBytes[0])
        {
            System.Buffer.MemoryCopy(colorPtr, bytePtr, textureStreamBytes.Length, colors.Length * 4);
        }
    }
    DataBox databox = device.ImmediateContext.MapSubresource(texture, 0, 0, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out DataStream stream);
    if (!databox.IsEmpty)
        stream.Write(textureStreamBytes, 0, textureStreamBytes.Length);
    device.ImmediateContext.UnmapSubresource(textureResource.Resource, 0);
    texture.Dispose();
}
The color array only has one color in it; it should be R:0.75, G:0.5, B:0, A:1.
I verified in the Visual Studio Graphics Debugger that my texture reaches the pixel shader with that color as the first texel.
The problem is that in my shader the reported color is R:0.874509800, G:0.749019600, B:0, A:0.
Here's my sampler state description:
SamplerStateDescription samplerDesc = new SamplerStateDescription()
{
    Filter = Filter.MinMagMipLinear,
    AddressU = TextureAddressMode.Border,
    AddressV = TextureAddressMode.Border,
    AddressW = TextureAddressMode.Border,
    MipLodBias = 0,
    MaximumAnisotropy = 1,
    ComparisonFunction = Comparison.Always,
    BorderColor = new Color4(255, 255, 0, 0),
    MinimumLod = 0,
    MaximumLod = float.MaxValue
};
and all I am doing to sample this texture is this:
Texture2DArray shaderTextures : register(t0);
Texture1D upperLeftCoordsTexture : register(t1);
SamplerState SampleType;
float4 upperLeftCoords = upperLeftCoordsTexture.Sample(SampleType, 0);
Again, upperLeftCoords reports R:0.874509800, G:0.749019600, B:0, A:0, but I am expecting R:0.75, G:0.5, B:0, A:1.
Also I am sure I am passing the textures in the right order:
deviceContext.PixelShader.SetShaderResource(0, textureArray);
deviceContext.PixelShader.SetShaderResource(1, upperLeftCords);
So I don't know why I am getting the wrong color in the pixel shader. Do you happen to know what I am doing wrong here?

Image auto-cropping, issue with borders

I'm taking a photo in my application. After I take it, I want the next screen's layout to show it automatically cropped, like in the image.
But I constantly lose the boundaries of the photo and get strange borders (indicated in red in the image).
Here is my code:
Android code
private void NewElement_OnDrawBitmap(object sender, EventArgs e)
{
    if (this.ViewGroup != null)
    {
        // get the subview
        Android.Views.View subView = ViewGroup.GetChildAt(0);
        int width = subView.Width;
        int height = subView.Height;
        // create and draw the bitmap
        Bitmap b = Bitmap.CreateBitmap(width, height, Bitmap.Config.Argb8888);
        Canvas c = new Canvas(b);
        ViewGroup.Draw(c);
        // save the bitmap to file
        bytes = SaveBitmapToFile(b);
    }
}
iOS code
UIGraphics.BeginImageContextWithOptions(this.Bounds.Size, true, 0);
this.Layer.RenderInContext(UIGraphics.GetCurrentContext());
var img = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
using (NSData imageData = img.AsPNG())
{
    bytes = new Byte[imageData.Length];
    System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, bytes, 0, Convert.ToInt32(imageData.Length));
}

private byte[] SaveBitmapToFile(Bitmap bm)
{
    MemoryStream ms = new MemoryStream();
    bm.Compress(Bitmap.CompressFormat.Png, 100, ms);
    return ms.ToArray();
}
From the code you shared, you're not doing any cropping, so your app just puts the original picture into the view, and the border you mentioned is probably the background of your view.
And in your project code, I saw you added:
Bitmap bitmap = Bitmap.CreateBitmap(b, 0, 0, (98 * width) / 100, (82 * height) / 100);
Yes, this is cropping the picture, but it only crops from the top-left corner to 98% of the width and 82% of the height.
If you want to crop the picture to a square centered in the image, you can simply do:
int offset = (height - width) / 2;
Bitmap bitmap = Bitmap.CreateBitmap(b, 0, offset, width, width);
But if you want to do more operations on the image, like move/zoom and so on, and then crop, I'd suggest turning to existing solutions such as FFImageLoading or Syncfusion; otherwise you'll have to calculate all the move/zoom data yourself to do the crop.
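For illustration, a small helper along these lines (a sketch, not part of the original answer) that handles both portrait and landscape bitmaps:
private Bitmap CropCenterSquare(Bitmap b)
{
    int size = Math.Min(b.Width, b.Height);          // side length of the square
    int x = (b.Width - size) / 2;                    // 0 for portrait bitmaps
    int y = (b.Height - size) / 2;                   // 0 for landscape bitmaps
    return Bitmap.CreateBitmap(b, x, y, size, size); // centered square crop
}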

How can I show only a part of an image in Processing?

I want to compare two images (same size) for a presentation, using a shifting line. On the left side of this line one image should be displayed, while on the right side the other picture should stay visible.
This is what I tried (bitmap and ch are the images):
PImage bitmap;
PImage ch;
int framerate = 1000;

void setup() {
  size(502, 316);
  bitmap = loadImage("bitmap_zentriert.jpg"); // Load an image into the program
  ch = loadImage("Karte_schweiz_zentriert.jpg"); // Load an image into the program
  frameRate(40); // framerate
}

void draw() {
  background(255);
  image(ch, 10, 10); // the one image in the back
  image(bitmap, 10, 10, bitmap.width, bitmap.height, 10, 10, mouseX, bitmap.height); // show part of the second image in front
  rect(mouseX, 10, 1, bitmap.height-1); // make line
}
But the image "bitmap" is drawn as the whole image, distorted, rather than just the part up to the line.
How can I do that?
I'd recommend using a PGraphics buffer, which is essentially "Another sketch" that also acts as an Image for drawing purposes, and most definitely not looping at "a thousand frames per second". Only draw something when you have something new to draw, using the redraw function in combination with mouse move events:
PImage img1, img2;
PGraphics imagebuffer;

void setup() {
  size(502, 316);
  imagebuffer = createGraphics(width, height);
  img1 = loadImage("first-image.jpg");
  img2 = loadImage("second-image.jpg");
  noLoop();
}

void mouseMoved() {
  redraw();
}

void draw() {
  image(img1, 0, 0);
  if (mouseX > 0) {
    imagebuffer = createGraphics(mouseX, height);
    imagebuffer.beginDraw();
    imagebuffer.image(img2, 0, 0);
    imagebuffer.endDraw();
    image(imagebuffer, 0, 0);
  }
}
In our setup we load the images and turn off looping, because we'll only redraw in response to redraw(). Then, on each mouse move event, we generate a new buffer that is only as wide as the current x-coordinate of the mouse and draw our image into it; the image gets cropped "for free" because the buffer has limited width. Finally we draw that buffer, as if it were an image, on top of the image we already have.
There are many ways to do it. One thing I suggest is to create a third image with the same width and height, load the pixels of the two images, and fill the third image with part of image1's pixels and the remaining part from image2's. I wrote this code to test it out, and it works fine:
PImage img1, img2, img3;

void setup() {
  size(500, 355);
  img1 = loadImage("a1.png"); // Load an image into the program
  img2 = loadImage("a2.png"); // Load an image into the program
  img3 = createImage(width, height, RGB); // create your third image with same width and height
  img1.loadPixels(); // Load the image pixels so you can access the array pixels[]
  img2.loadPixels();
  frameRate(40); // frame rate
}

void draw() {
  background(255);
  // Copy first half from first image
  for (int i = 0; i < mouseX; i++) {
    for (int j = 0; j < height; j++) {
      img3.pixels[j*width+i] = img1.pixels[j*width+i];
    }
  }
  // Copy second half from second image
  for (int i = mouseX; i < width; i++) {
    for (int j = 0; j < height; j++) {
      img3.pixels[j*width+i] = img2.pixels[j*width+i];
    }
  }
  // Update the third image pixels
  img3.updatePixels();
  // Simply draw that image
  image(img3, 0, 0); // The one image in the back
  // Draw the separation line
  rect(mouseX, 0, 0, height); // Make line
}
Result:

Loading in folder outside of my sketch's data folder (processing)

I'm sure there is a pretty straightforward answer to this... I just can't quite figure it out.
In my Processing sketch's data folder, there is a folder named test_segments, which contains a bunch of images.
I need to load an image from test_segments into my PImage.
It looks like this: http://imgur.com/a/iG3B6
My code:
final int len=25;
final float thresh=170;
boolean newDesign=false;
PImage pic;
ArrayList<PImage> imgContainer;
int n=3;

void setup() {
  size(800, 800, P2D);
  colorMode(RGB, 255);
  background(250, 250, 250);
  rectMode(CENTER);
  //imageMode(CENTER);
  pic=loadImage("hand.jpg");
  pic.resize(width, height);
  color c1 = color(200,25,25);
  color c2 = color(25, 255, 200);
  imgContainer=new ArrayList<PImage>();
  PImage pimg1=loadImage("this is where test_0.png needs to go");
  pimg1.resize(50, 50);
  imgContainer.add(pimg1);
  noLoop();
  noStroke();
}
void draw() {
  if (newDesign==false) {
    return;
  }
  pic.loadPixels();
  for (int y = 0; y < height; y+=40) {
    for (int x = 0; x < width; x+=40) {
      int index=y*width+x;
      color pixelValue = pic.pixels[index];
      color rgb = pixelValue;
      int r = (rgb >> 16) & 0xFF; // Faster way of getting red(argb)
      int g = (rgb >> 8) & 0xFF;  // Faster way of getting green(argb)
      int b = rgb & 0xFF;
      // How far is the current color from white
      float dista=dist(r,g,b,255,255,255);
      // 50 is a threshold value allowing close to white being identified as white
      // This value needs to be adjusted based on your actual background color
      // Next block is processed only if the pixel not white
      if (dista>30) {
        float pixelBrightness = brightness(pixelValue);
        float imgPicked=constrain(pixelBrightness/thresh, 0, n-1);
        image(imgContainer.get((int)imgPicked), x, y);
      }
    }
  }
}

void mouseReleased() {
  newDesign=!newDesign;
  redraw();
}
Thanks!
You should just be able to do:
PImage pimg1 = loadImage("test_segments/test_0.png");
If that doesn't work, please try to post an MCVE like we talked about before. Here's an example of an MCVE that would demonstrate your problem:
PImage pimg1 = loadImage("test_segments/test_0.png");
image(pimg1, 0, 0);
Don't forget to include exactly what you expect to happen, and exactly what's happening instead. Good luck.

Rotate Bitmap Xamarin

I am loading an image from a web server, and I want to rotate it if the orientation is wrong. I've seen how to do this for an image file on the phone (the API takes the filename), but not with an actual bitmap in memory. I am using this to resize it, but I'm unsure about the rotation part.
public Bitmap resizeAndRotate(Bitmap image, int width, int height)
{
    Bitmap newImage = Bitmap.CreateScaledBitmap(image, width, height, true);
    return newImage;
}
You can scale and rotate the bitmap in one call by passing Bitmap.CreateBitmap an Android.Graphics.Matrix that includes both the scale and the rotation:
public Bitmap resizeAndRotate(Bitmap image, int width, int height)
{
    var matrix = new Matrix();
    var scaleWidth = ((float)width) / image.Width;
    var scaleHeight = ((float)height) / image.Height;
    matrix.PostRotate(90);
    matrix.PreScale(scaleWidth, scaleHeight);
    return Bitmap.CreateBitmap(image, 0, 0, image.Width, image.Height, matrix, true);
}
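If the required rotation is not always 90 degrees, the same idea works with the angle passed in. A sketch (the degrees parameter is an addition for illustration, not part of the original answer):
public Bitmap ResizeAndRotate(Bitmap image, int width, int height, float degrees)
{
    var matrix = new Matrix();
    matrix.PostRotate(degrees);                    // rotate by the requested angle
    matrix.PreScale(((float)width) / image.Width,  // then scale to the target size
                    ((float)height) / image.Height);
    return Bitmap.CreateBitmap(image, 0, 0, image.Width, image.Height, matrix, true);
}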
