C# EmguCV Resize Mat but keep bounds/resolution - image

I've tried many things, but all my attempts fail.
I need to resize a grayscale image (2560x1440) to a lower or higher resolution, then set the bounds back to the original size (2560x1440), keeping the resized image centered.
I'm using EmguCV 4.3 with Mat. I tried many approaches, including an ROI in the Mat constructor and a CopyTo, but nothing works; it always produces a new Mat with the resized bounds.
Example of the required result:
Source image: (2560x1440)
50% resized, but keep same bounds as source (2560x1440)
300% resized, but keep same bounds as source (2560x1440)

Use WarpAffine to apply an affine transformation to the image. With the transformation matrix you can apply scale and translate transformations. Rotation is also supported but not covered in my example. Translation values can also be negative.
The WarpAffine method has some more parameters you can play around with.
public void Test()
{
    var img = new Mat("Bmv60.png", ImreadModes.Grayscale);
    Mat upscaled = GetContentScaled(img, 2.0, 0.5, 0, 0);
    upscaled.Save("scaled1.png");
    Mat downscaled = GetContentScaled(img, 0.5, 0.5, 0, 0);
    downscaled.Save("scaled2.png");
}

private Mat GetContentScaled(Mat src, double xScale, double yScale, double xTrans, double yTrans, Inter interpolation = Inter.Linear)
{
    var dst = new Mat(src.Size, src.Depth, src.NumberOfChannels);
    var translateTransform = new Matrix<double>(2, 3)
    {
        [0, 0] = xScale, // x scale
        [1, 1] = yScale, // y scale
        [0, 2] = xTrans + (src.Width - src.Width * xScale) / 2.0,   // x translation + compensation for x scaling
        [1, 2] = yTrans + (src.Height - src.Height * yScale) / 2.0  // y translation + compensation for y scaling
    };
    CvInvoke.WarpAffine(src, dst, translateTransform, dst.Size, interpolation);
    return dst;
}
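The answer notes that rotation is also supported but not covered. As a minimal sketch of that extension (GetContentScaledRotated is a hypothetical helper, not part of the original answer, and the scale here is uniform rather than per-axis), EmguCV's GetRotationMatrix2D can build a centre-preserving rotation-plus-scale matrix that feeds the same WarpAffine call:
private Mat GetContentScaledRotated(Mat src, double scale, double angleDeg, Inter interpolation = Inter.Linear)
{
    var dst = new Mat(src.Size, src.Depth, src.NumberOfChannels);
    var m = new Mat();
    // getRotationMatrix2D already compensates the translation so the image centre stays fixed
    CvInvoke.GetRotationMatrix2D(new PointF(src.Width / 2f, src.Height / 2f), angleDeg, scale, m);
    CvInvoke.WarpAffine(src, dst, m, dst.Size, interpolation);
    return dst;
}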

I feel as if there should be a more elegant way to do this; however, I offer two extension methods:
static void CopyToCenter(this Image<Gray, byte> imgSrc, Image<Gray, byte> imgDst)
{
    int dx = (imgSrc.Cols - imgDst.Cols) / 2;
    int dy = (imgSrc.Rows - imgDst.Rows) / 2;
    byte[,,] srcData = imgSrc.Data;
    byte[,,] dstData = imgDst.Data;
    for (int v = 0; v < imgDst.Rows; v++)
    {
        for (int u = 0; u < imgDst.Cols; u++)
        {
            dstData[v, u, 0] = srcData[v + dy, u + dx, 0];
        }
    }
}

static void CopyFromCenter(this Image<Gray, byte> imgDst, Image<Gray, byte> imgSrc)
{
    int dx = (imgDst.Cols - imgSrc.Cols) / 2;
    int dy = (imgDst.Rows - imgSrc.Rows) / 2;
    byte[,,] srcData = imgSrc.Data;
    byte[,,] dstData = imgDst.Data;
    for (int v = 0; v < imgSrc.Rows; v++)
    {
        for (int u = 0; u < imgSrc.Cols; u++)
        {
            dstData[v + dy, u + dx, 0] = srcData[v, u, 0];
        }
    }
}
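As for the suspected "more elegant way": here is a minimal sketch (CopyFromCenterRoi is an illustrative name; it assumes EmguCV's ROI idiom, where CopyTo respects the destination ROI, and needs System.Drawing.Rectangle) that lets EmguCV do the per-pixel work:
static void CopyFromCenterRoi(this Image<Gray, byte> imgDst, Image<Gray, byte> imgSrc)
{
    int dx = (imgDst.Cols - imgSrc.Cols) / 2;
    int dy = (imgDst.Rows - imgSrc.Rows) / 2;
    imgDst.ROI = new Rectangle(dx, dy, imgSrc.Cols, imgSrc.Rows); // restrict writes to the centre
    imgSrc.CopyTo(imgDst);        // copies only into the ROI window
    imgDst.ROI = Rectangle.Empty; // reset the ROI afterwards
}
The same idea inverted (reading a centred ROI from the source) would replace CopyToCenter.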
You can use the two extension methods like this:
static void Main(string[] args)
{
    double scaleFactor = 0.8;
    Image<Gray, byte> originalImage = new Image<Gray, byte>("Bmv60.png");
    Image<Gray, byte> scaledImage = originalImage.Resize(scaleFactor, Inter.Linear);
    Image<Gray, byte> outputImage = new Image<Gray, byte>(originalImage.Size);
    if (scaleFactor > 1)
    {
        scaledImage.CopyToCenter(outputImage);
    }
    else
    {
        outputImage.CopyFromCenter(scaledImage);
    }
}
You didn't request a specific language, so I hope C# is useful.

Related

Using p5.js to change the processing codes and display the shape

I want to convert Processing code to p5.js. I tried writing the p5.js version, but nothing is displayed. The original file and my code are shown below.
This is my code:
function setup(){
  createCanvas(400, 400);
}
var N = 100;
var cx = [0.000, 1.000, 0.500];
var cy = [0.000, 0.000, 0.866];
var x = 0.0, y = 0.0;
function draw(){
  for (var i = 0; i < N; i++) {
    nextPoint();
    drawPoint();
  }
}
function drawPoint(){
  strokeWeight(1);
  var px = map(x, 0, 1.0, 0, 300);
  var py = map(y, 0, 1.0, 0, 300);
  point(px, py);
}
function nextPoint() {
  let r = random(3);
  x = (x + cx[r]) / 2.0;
  y = (y + cy[r]) / 2.0;
}
This is the source code (from Processing):
void setup(){
  size(400, 400);
}
int N = 100;
float[] cx = { 0.000, 1.000, 0.500 };
float[] cy = { 0.000, 0.000, 0.866 };
float x = 0.0, y = 0.0;
void draw(){
  for (int i = 0; i < N; i++) {
    nextPoint();
    drawPoint();
  }
}
void drawPoint(){
  strokeWeight(1);
  float px = map(x, 0, 1.0, 0, 300);
  float py = map(y, 0, 1.0, 0, 300);
  point(px, py);
}
void nextPoint() {
  int r = (int)random(3);
  x = (x + cx[r]) / 2.0;
  y = (y + cy[r]) / 2.0;
}
You need to use floor() when you define r. Although JavaScript doesn't have rigid data types for its variables like Java, it still doesn't know what you mean when you say something like cx[1.348]. When you use a non-integer value for array access, cx[1.348] is undefined, so x becomes NaN, and trying to draw a point at (NaN, NaN) doesn't do anything. So nextPoint() should say let r = floor(random(3)); (you might also consider setting the background color in setup()).

Processing mirror image over x axis?

I was able to copy the image to the location, but not able to mirror it. What am I missing?
PImage img;
float srcY;
float srcX;
int destX;
int destY;
img = loadImage("http://oldpalmgolfclub.com/wp-content/uploads/2012/02/Palm-Beach-State-College2-e1329949470871.jpg");
size(img.width, img.height * 2);
image(img, 0, 0);
image(img, 0, 330);
int num_pixels = img.width * img.height;
int copiedWidth = 319 - 254;
int copiedHeight = 85 - 22;
int startX = (width / 2) - (copiedWidth / 2);
int startY = (height / 2) - (copiedHeight / 2);
How about simply scaling by -1 on the x axis?
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
image(img, 0, 0);
scale(-1, 1); // flip on X axis
image(img, -img.width, img.height); // draw offset
This can be achieved by manipulating pixels as well, but needs a bit of arithmetic:
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int i = 0; i < flipped.pixels.length; i++) { // loop through each pixel
  int srcX = i % flipped.width;        // source (original) x position
  int dstX = flipped.width - srcX - 1; // destination (flipped) x position = (width - x - 1)
  int y = i / flipped.width;           // y coordinate
  flipped.pixels[y * flipped.width + dstX] = img.pixels[i]; // write the x-flipped destination pixel
}
// y*width+x converts from x,y coordinates to a pixel array index
flipped.updatePixels();
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
The above can be achieved using get() and set(), but using the pixels[] array is faster. A single for loop is generally faster than using 2 nested for loops to traverse the image with x,y counters:
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int y = 0; y < img.height; y++) {
  for (int x = 0; x < img.width; x++) {
    flipped.set(img.width - x - 1, y, img.get(x, y));
  }
}
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
You can copy a 1px 'slice'/column in a single for loop, which is faster (but still not as fast as direct pixel manipulation):
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int x = 0; x < flipped.width; x++) { // loop through each column
  flipped.set(flipped.width - x - 1, 0, img.get(x, 0, 1, img.height)); // copy each column in reverse x order
}
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
There are other alternatives, like accessing the underlying Java BufferedImage (although this means the Processing sketch will mostly only work in Java mode) or using a PShader, but these approaches are more complex. It's generally a good idea to keep things simple (especially when getting started).

Detect and fix text skew by rotating image

Is there a way (using something like OpenCV) to detect text skew and correct it by rotating the image? Pretty much like this?
Rotating an image seems easy enough if you know the angle, but for the images I'm processing I won't; it will need to be detected somehow.
Based on your comment above, here is code based on the tutorial here; it works fine for the above image.
Source
Rotated
Mat src = imread("text.png", 0);
Mat thr, dst;
threshold(src, thr, 200, 255, THRESH_BINARY_INV);
imshow("thr", thr);
std::vector<cv::Point> points;
cv::Mat_<uchar>::iterator it = thr.begin<uchar>();
cv::Mat_<uchar>::iterator end = thr.end<uchar>();
for (; it != end; ++it)
    if (*it)
        points.push_back(it.pos());
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, box.angle, 1);
//cv::Mat rotated(src.size(), src.type(), Scalar(255,255,255));
Mat rotated;
cv::warpAffine(src, rotated, rot_mat, src.size(), cv::INTER_CUBIC);
imshow("rotated", rotated);
Edit: Also see the answer here; it might be helpful.
Here's an implementation of the Projection Profile Method for skew angle estimation. Various angle projections are accumulated in a score array, and the skew angle is the angle within a search interval whose projection maximizes alignment. The idea is to rotate the image at various angles and generate a histogram of pixel row sums for each iteration. The skew angle is the one that maximizes the differences between neighboring histogram rows; using this skew angle, we rotate the image to correct the skew.
Input
Result
Skew angle: -5
import cv2
import numpy as np
from scipy.ndimage import interpolation as inter

def correct_skew(image, delta=1, limit=5):
    def determine_score(arr, angle):
        data = inter.rotate(arr, angle, reshape=False, order=0)
        histogram = np.sum(data, axis=1, dtype=float)
        score = np.sum((histogram[1:] - histogram[:-1]) ** 2, dtype=float)
        return histogram, score

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

    scores = []
    angles = np.arange(-limit, limit + delta, delta)
    for angle in angles:
        histogram, score = determine_score(thresh, angle)
        scores.append(score)

    best_angle = angles[scores.index(max(scores))]

    (h, w) = image.shape[:2]
    center = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D(center, best_angle, 1.0)
    corrected = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC,
                               borderMode=cv2.BORDER_REPLICATE)
    return best_angle, corrected

if __name__ == '__main__':
    image = cv2.imread('1.png')
    angle, corrected = correct_skew(image)
    print('Skew angle:', angle)
    cv2.imshow('corrected', corrected)
    cv2.waitKey()
Note: You may have to adjust the delta or limit values depending on the image. The delta value controls the iteration step; it iterates up to the limit, which controls the maximum angle. This method is straightforward: it checks each angle in steps of delta and, as written, only corrects skew in the range of +/- 5 degrees. If you need to correct a larger angle, increase the limit value.
Here is a Java version (using OpenCV's Java bindings) for your reference.
package com.test13;

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgcodecs.Imgcodecs;

public class EdgeDetection {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) throws Exception {
        Mat src = Imgcodecs.imread("src//data//inclined_text.jpg");
        Mat src_gray = new Mat();
        Imgproc.cvtColor(src, src_gray, Imgproc.COLOR_BGR2GRAY);
        Imgcodecs.imwrite("src//data//inclined_text_src_gray.jpg", src_gray);

        Mat output = new Mat();
        Core.bitwise_not(src_gray, output);
        Imgcodecs.imwrite("src//data//inclined_text_output.jpg", output);

        Mat points = Mat.zeros(output.size(), output.type());
        Core.findNonZero(output, points);
        MatOfPoint mpoints = new MatOfPoint(points);
        MatOfPoint2f points2f = new MatOfPoint2f(mpoints.toArray());
        RotatedRect box = Imgproc.minAreaRect(points2f);

        Mat src_squares = src.clone();
        Mat rot_mat = Imgproc.getRotationMatrix2D(box.center, box.angle, 1);
        Mat rotated = new Mat();
        Imgproc.warpAffine(src_squares, rotated, rot_mat, src_squares.size(), Imgproc.INTER_CUBIC);
        Imgcodecs.imwrite("src//data//inclined_text_squares_rotated.jpg", rotated);
    }
}
private fun main() {
    val bmp: Bitmap? = null // any bitmap (if you are working with bitmaps)
    var mRgba = Mat()       // else you can use the Mat from onCameraFrame directly
    val mGray = Mat()
    val bmp32: Bitmap = bmp!!.copy(Bitmap.Config.ARGB_8888, true)
    Utils.bitmapToMat(bmp32, mRgba)
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_BGR2GRAY)
    mRgba = makeOrientationCorrection(mRgba, mGray) // here the actual magic starts
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_BGR2GRAY)
    val bmpOutX = Bitmap.createBitmap(
        mRgba.cols(),
        mRgba.rows(),
        Bitmap.Config.ARGB_8888
    )
    Utils.matToBitmap(mRgba, bmpOutX)
    binding.imagePreview.setImageBitmap(bmpOutX)
}
private fun makeOrientationCorrection(mRGBA: Mat, mGRAY: Mat): Mat {
    val dst = Mat()
    val cdst = Mat()
    val cdstP: Mat
    Imgproc.Canny(mGRAY, dst, 50.0, 200.0, 3, false)
    Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR)
    cdstP = cdst.clone()
    val linesP = Mat()
    Imgproc.HoughLinesP(dst, linesP, 1.0, Math.PI / 180, 50, 50.0, 10.0)
    var biggestLineX1 = 0.0
    var biggestLineY1 = 0.0
    var biggestLineX2 = 0.0
    var biggestLineY2 = 0.0
    var biggestLine = 0.0
    for (x in 0 until linesP.rows()) {
        val l = linesP[x, 0]
        Imgproc.line(
            cdstP, org.opencv.core.Point(l[0], l[1]),
            org.opencv.core.Point(l[2], l[3]),
            Scalar(0.0, 0.0, 255.0), 3, Imgproc.LINE_AA, 0)
    }
    for (x in 0 until linesP.rows()) {
        val l = linesP[x, 0]
        val x1 = l[0]
        val y1 = l[1]
        val x2 = l[2]
        val y2 = l[3]
        val lineHeight = sqrt(((x2 - x1).pow(2.0)) + ((y2 - y1).pow(2.0)))
        if (biggestLine < lineHeight) {
            val angleOfRotationX1 = angleOf(PointF(x1.toFloat(), y1.toFloat()), PointF(x2.toFloat(), y2.toFloat()))
            Log.e("angleOfRotationX1", "$angleOfRotationX1")
            if (angleOfRotationX1 < 45.0 || angleOfRotationX1 > 270.0) {
                biggestLine = lineHeight
                if (angleOfRotationX1 < 45.0) {
                    biggestLineX1 = x1
                    biggestLineY1 = y1
                    biggestLineX2 = x2
                    biggestLineY2 = y2
                }
                if (angleOfRotationX1 > 270.0) {
                    biggestLineX1 = x2
                    biggestLineY1 = y2
                    biggestLineX2 = x1
                    biggestLineY2 = y1
                }
            }
        }
        if (x == linesP.rows() - 1) {
            Imgproc.line(
                cdstP, org.opencv.core.Point(biggestLineX1, biggestLineY1),
                org.opencv.core.Point(biggestLineX2, biggestLineY2),
                Scalar(255.0, 0.0, 0.0), 3, Imgproc.LINE_AA, 0)
        }
    }
    var angle = angleOf(PointF(biggestLineX1.toFloat(), biggestLineY1.toFloat()), PointF(biggestLineX2.toFloat(), biggestLineY2.toFloat()))
    Log.e("angleOfRotationX2", "$angle")
    angle -= (angle * 2)
    return deskew(mRGBA, angle)
}
fun angleOf(p1: PointF, p2: PointF): Double {
    val deltaY = (p1.y - p2.y).toDouble()
    val deltaX = (p2.x - p1.x).toDouble()
    val result = Math.toDegrees(Math.atan2(deltaY, deltaX))
    return if (result < 0) 360.0 + result else result
}
private fun deskew(src: Mat, angle: Double): Mat {
    val center = org.opencv.core.Point((src.width() / 2).toDouble(), (src.height() / 2).toDouble())
    val scaleBy = if (angle < 0) {
        1.0 + ((0.5 * angle) / 45) // scale down by up to 0.50 (50%) based on the angle
    } else {
        1.0 - ((0.3 * angle) / 45) // scale down by up to 0.30 (30%) based on the angle
    }
    Log.e("scaleBy", "" + scaleBy)
    val rotImage = Imgproc.getRotationMatrix2D(center, angle, scaleBy)
    val size = Size(src.width().toDouble(), src.height().toDouble())
    Imgproc.warpAffine(src, src, rotImage, size, Imgproc.INTER_LINEAR + Imgproc.CV_WARP_FILL_OUTLIERS)
    return src
}
Make sure you run makeOrientationCorrection() on another thread; otherwise, the UI won't update for 2-5 seconds.

Greyscale Image from YUV420p data

From what I have read on the internet, the Y value is the luminance and can be used to create a greyscale image. The following link, https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx, has some C# code for working out the luminance of a bitmap image:
public Bitmap ConvertToGrayscale(Bitmap source)
{
    Bitmap bm = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < bm.Height; y++)
    {
        for (int x = 0; x < bm.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            int luma = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
            bm.SetPixel(x, y, Color.FromArgb(luma, luma, luma));
        }
    }
    return bm;
}
I have a method that returns the YUV values, with the Y data in a byte array. My current code fails on Marshal.Copy with "attempted to read or write protected memory".
public Bitmap ConvertToGrayscale2(byte[] yuvData, int width, int height)
{
    Bitmap bmp;
    IntPtr blue = IntPtr.Zero;
    int inputOffSet = 0;
    long[] pixels = new long[width * height];
    try
    {
        for (int y = 0; y < height; y++)
        {
            int outputOffSet = y * width;
            for (int x = 0; x < width; x++)
            {
                int grey = yuvData[inputOffSet + x] & 0xff;
                unchecked
                {
                    pixels[outputOffSet + x] = UINT_Constant | (grey * INT_Constant);
                }
            }
            inputOffSet += width;
        }
        blue = Marshal.AllocCoTaskMem(pixels.Length);
        Marshal.Copy(pixels, 0, blue, pixels.Length); // fails here: attempted to read or write protected memory
        bmp = new Bitmap(width, height, width, PixelFormat.Format24bppRgb, blue);
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        if (blue != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(blue);
            blue = IntPtr.Zero;
        }
    }
    return bmp;
}
Any help would be appreciated.
I think you have allocated pixels.Length bytes, but are copying pixels.Length longs, which is 8 times as much memory (a long is 64 bits or 8 bytes in size).
You could try:
blue = Marshal.AllocCoTaskMem(Marshal.SizeOf(pixels[0]) * pixels.Length);
You might also need to use int[] for pixels and PixelFormat.Format32bppRgb in the Bitmap constructor (as they are both 32 bits). Using long[] gives you 64 bits per pixel which isn't what a 24 bit pixel format is expecting.
You might end up with shades of blue instead of grey though - depends on what your values of UINT_Constant and INT_Constant are.
There is no need to do "& 0xff", as yuvData[] already contains a byte.
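Putting those suggestions together, here is a minimal hedged sketch (not from the original answer) of the int[]/Format32bppRgb variant, assuming yuvData holds at least width * height luma bytes:
// Sketch: one 32-bit pixel per luma byte, allocation sized in bytes.
int[] pixels = new int[width * height];
for (int i = 0; i < pixels.Length; i++)
{
    int grey = yuvData[i];
    pixels[i] = (0xFF << 24) | (grey << 16) | (grey << 8) | grey; // opaque grey pixel
}
IntPtr buffer = Marshal.AllocCoTaskMem(sizeof(int) * pixels.Length);
Marshal.Copy(pixels, 0, buffer, pixels.Length);
Bitmap bmp = new Bitmap(width, height, 4 * width, PixelFormat.Format32bppRgb, buffer);
// The Bitmap does not copy this buffer: free it with Marshal.FreeCoTaskMem
// only after the Bitmap is no longer in use.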
Here are another couple of approaches you could try.
public Bitmap ConvertToGrayScale(byte[] yData, int width, int height)
{
    // 3 * width bytes per scanline, rounded up to a multiple of 4 bytes
    int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
    byte[] pixels = new byte[stride * height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            byte grey = yData[y * width + x];
            pixels[y * stride + 3 * x] = grey;
            pixels[y * stride + 3 * x + 1] = grey;
            pixels[y * stride + 3 * x + 2] = grey;
        }
    }
    IntPtr pixelsPtr = Marshal.AllocCoTaskMem(pixels.Length);
    try
    {
        Marshal.Copy(pixels, 0, pixelsPtr, pixels.Length);
        Bitmap bitmap = new Bitmap(
            width,
            height,
            stride,
            PixelFormat.Format24bppRgb,
            pixelsPtr);
        return bitmap;
    }
    finally
    {
        Marshal.FreeCoTaskMem(pixelsPtr); // matches AllocCoTaskMem
    }
}
public Bitmap ConvertToGrayScale(byte[] yData, int width, int height)
{
    // 3 * width bytes per scanline, rounded up to a multiple of 4 bytes
    int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
    IntPtr pixelsPtr = Marshal.AllocCoTaskMem(stride * height);
    try
    {
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                byte grey = yData[y * width + x];
                Marshal.WriteByte(pixelsPtr, y * stride + 3 * x, grey);
                Marshal.WriteByte(pixelsPtr, y * stride + 3 * x + 1, grey);
                Marshal.WriteByte(pixelsPtr, y * stride + 3 * x + 2, grey);
            }
        }
        Bitmap bitmap = new Bitmap(
            width,
            height,
            stride,
            PixelFormat.Format24bppRgb,
            pixelsPtr);
        return bitmap;
    }
    finally
    {
        Marshal.FreeCoTaskMem(pixelsPtr); // matches AllocCoTaskMem
    }
}
Using the following code I get a black image with a few pixels in the top left corner, but it runs stably:
public static Bitmap ToGrayscale(byte[] yData, int width, int height)
{
    Bitmap bm = new Bitmap(width, height, PixelFormat.Format32bppRgb);
    Rectangle dimension = new Rectangle(0, 0, bm.Width, bm.Height);
    BitmapData picData = bm.LockBits(dimension, ImageLockMode.ReadWrite, bm.PixelFormat);
    IntPtr pixelStateAddress = picData.Scan0;
    int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
    byte[] pixels = new byte[stride * height];
    try
    {
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                byte grey = yData[y * width + x];
                pixels[y * stride + 3 * x] = grey;
                pixels[y * stride + 3 * x + 1] = grey;
                pixels[y * stride + 3 * x + 2] = grey;
            }
        }
        Marshal.Copy(pixels, 0, pixelStateAddress, pixels.Length);
        bm.UnlockBits(picData);
    }
    catch (Exception)
    {
        throw;
    }
    return bm;
}
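The black output is consistent with a format mismatch: the bitmap is created as Format32bppRgb (4 bytes per pixel) while the buffer is filled 3 bytes per pixel using a hand-computed 24bpp stride. A hedged sketch of the minimal change (not from the original thread, reusing yData, width, and height from above), writing with the stride LockBits actually reports:
// Sketch: match the buffer layout to the locked bitmap's real format and stride.
Bitmap bm = new Bitmap(width, height, PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, width, height);
BitmapData picData = bm.LockBits(rect, ImageLockMode.WriteOnly, bm.PixelFormat);
int stride = picData.Stride; // GDI+ reports the padded scanline size
byte[] pixels = new byte[stride * height];
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        byte grey = yData[y * width + x];
        int i = y * stride + 3 * x;
        pixels[i] = grey;     // B
        pixels[i + 1] = grey; // G
        pixels[i + 2] = grey; // R
    }
}
Marshal.Copy(pixels, 0, picData.Scan0, pixels.Length);
bm.UnlockBits(picData);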

How can I convert an Image to byte array in J2ME?

My requirement is like this: I need to read a file from the mobile phone using a file connection, create a thumbnail of that image, and post it to the server. I am able to read the image using the FileConnection API, and also able to create the thumbnail.
After creating the thumbnail, I am not able to find a method to convert that image back to byte[]. Is it possible?
Code for thumbnail conversion:
private Image createThumbnail(Image image) {
    int sourceWidth = image.getWidth();
    int sourceHeight = image.getHeight();
    int thumbWidth = 128;
    int thumbHeight = -1;
    if (thumbHeight == -1)
        thumbHeight = thumbWidth * sourceHeight / sourceWidth;
    Image thumb = Image.createImage(thumbWidth, thumbHeight);
    Graphics g = thumb.getGraphics();
    for (int y = 0; y < thumbHeight; y++) {
        for (int x = 0; x < thumbWidth; x++) {
            g.setClip(x, y, 1, 1);
            int dx = x * sourceWidth / thumbWidth;
            int dy = y * sourceHeight / thumbHeight;
            g.drawImage(image, x - dx, y - dy, Graphics.TOP | Graphics.LEFT);
        }
    }
    Image immutableThumb = Image.createImage(thumb);
    return immutableThumb;
}
MIDP2.0's Image.getRGB() is your friend. You can obtain the ARGB pixel data as an int array as follows:
int w = theImage.getWidth();
int h = theImage.getHeight();
int[] argb = new int[w * h];
theImage.getRGB(argb, 0, w, 0, 0, w, h);
The int array can then be used as a parameter to Image.createRGBImage(), or in desktop Java, BufferedImage can be used as follows:
BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
img.setRGB(0, 0, w, h, argb, 0, w);
